Are Hallucinations Bad Estimations?

ICLR 2026 Conference Submission 554 Authors

01 Sept 2025 (modified: 23 Dec 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: hallucination
Abstract: We formalize hallucinations in generative models as failures to link an estimate to any plausible cause. Under this interpretation, we show that even loss‑minimizing optimal estimators still hallucinate. We confirm this with a general high‑probability lower bound on the hallucination rate for generic data distributions. This reframes hallucination as a structural misalignment between loss minimization and human‑acceptable outputs, and hence as an estimation error induced by miscalibration. Experiments on coin aggregation, open‑ended QA, and text‑to‑image generation support our theory.
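The following is a minimal sketch, not the paper's actual setup or code, of the kind of phenomenon the abstract describes: under squared loss, the optimal constant estimate of a toy coin‑flip outcome is the mean, which matches no plausible outcome in the support, so the loss‑minimizing estimator "hallucinates" in the abstract's sense. The toy coin encoding and the choice of squared loss are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy example: outcomes of a fair coin, encoded as 0 (tails)
# or 1 (heads). The set of "plausible" outputs is {0, 1}.
samples = rng.integers(0, 2, size=100_000)

# Under squared loss, the optimal constant estimator is the sample mean.
optimal_estimate = samples.mean()  # ~0.5

# The estimate minimizes expected loss yet matches no plausible outcome,
# i.e. it cannot be linked to any single cause in the support -- the
# abstract's notion of a hallucination by an optimal estimator.
plausible = {0, 1}
print(f"loss-minimizing estimate: {optimal_estimate:.3f}")
print(f"is it a plausible outcome? {optimal_estimate in plausible}")

# Compare expected squared losses to confirm the mean is optimal
# among these candidate constant estimates.
for guess in (0.0, 0.5, 1.0):
    print(f"E[(X - {guess})^2] ≈ {np.mean((samples - guess) ** 2):.3f}")
```

Running this prints an estimate near 0.5, which is outside {0, 1}, while its expected squared loss (≈0.25) is lower than that of either plausible outcome (≈0.5 each).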
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 554