Valuable Hallucinations: Realizable Non-Realistic Propositions

ACL ARR 2025 May Submission7152 Authors

20 May 2025 (modified: 03 Jul 2025), CC BY 4.0
Abstract: This paper clarifies the specific connotation of valuable hallucinations in large language models (LLMs), addressing a gap in the existing literature. Hallucinations in LLMs generally denote unfaithful, fabricated, inconsistent, or nonsensical generated content, and prior work typically treats them as a uniform flaw. Rather than viewing all hallucinations negatively, we provide a systematic definition and analysis of hallucination value and propose methods for enhancing it, focusing on realizable non-realistic propositions: ideas that are not currently true but could be achievable under certain conditions, and that can therefore carry constructive value in specific contexts. We evaluate Qwen3-0.6B, Qwen2.5-72B-Instruct, and DeepSeek-R1-671B on the HalluQA dataset using ReAct prompting, which incorporates reasoning, confidence assessment, and answer verification to control and optimize hallucinations. ReAct reduces overall hallucinations by 4.67%, 5.12%, and 8.45% for Qwen3-0.6B, Qwen2.5-72B-Instruct, and DeepSeek-R1-671B, respectively, while increasing the proportion of valuable hallucinations from 0% to 4.01%, from 6.45% to 7.92%, and from 1.12% to 7.84%. These results suggest that systematically controlling hallucinations can improve their usefulness without compromising factual reliability.
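For concreteness, a minimal Python sketch of the reason/confidence/verify control loop the abstract describes is shown below. The `query_model` stub, the prompt wording, and the confidence parsing are illustrative assumptions, not the authors' implementation; whether a flagged output is a "valuable" hallucination (a realizable non-realistic proposition) would still require the paper's judging step.

```python
# Sketch of a ReAct-style reason -> self-assess confidence -> verify pass,
# assuming a generic text-completion callable. Substitute a real client for
# Qwen3-0.6B, Qwen2.5-72B-Instruct, or DeepSeek-R1-671B.

REACT_TEMPLATE = """Question: {question}

Step 1 (Reason): Think step by step about what is actually known.
Step 2 (Confidence): Rate your confidence in your answer from 0.0 to 1.0.
Step 3 (Verify): Re-check the answer against your reasoning and flag any
claim that is not currently true but could be realizable under stated
conditions.

Answer:"""


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError("wire this to your model endpoint")


def react_answer(question: str, confidence_threshold: float = 0.5) -> dict:
    """Run one reason/confidence/verify pass and crudely parse a confidence score."""
    response = query_model(REACT_TEMPLATE.format(question=question))
    # Naive parse: take the first token in [0.0, 1.0] that reads as a float.
    # A real pipeline would parse a structured field; the paper does not
    # specify its parsing rules.
    confidence = 0.0
    for token in response.replace(":", " ").split():
        try:
            value = float(token)
        except ValueError:
            continue
        if 0.0 <= value <= 1.0:
            confidence = value
            break
    return {
        "answer": response,
        "confidence": confidence,
        # Low-confidence outputs are hallucination candidates for review.
        "flag_for_review": confidence < confidence_threshold,
    }
```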
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: prompt engineering, reflection techniques, hallucinations
Contribution Types: Model analysis & interpretability
Languages Studied: Chinese, English
Submission Number: 7152