Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Large Language Model; Hallucination
Abstract: Large language models (LLMs) frequently generate hallucinations (content that is factually inaccurate or deviates from the provided context), yet diagnosing their causes is difficult due to the complex interplay of underlying factors. This paper introduces a framework for systematically understanding the sources of hallucination behavior in large language models. Our key insight is that hallucinations arise when more frequent but non-factual associations outweigh faithful ones. Through theoretical and empirical analyses, we demonstrate that decoder-only transformers effectively function as subsequence embedding models, with the fully connected layers encoding input-output associations. We propose a tracing algorithm that identifies causal subsequences by analyzing hallucination probabilities across randomized input contexts. Experiments show that our method outperforms standard attribution techniques in identifying the causes of hallucinations and is supported by evidence from the model's training corpus. This work provides a unified perspective on hallucinations and a robust framework for analyzing their causes.
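The abstract only describes the tracing algorithm at a high level. Below is a minimal illustrative sketch, not the paper's implementation, of how one might score a candidate subsequence by how often the model still produces the hallucinated output when that subsequence is embedded in otherwise randomized contexts. All names here (`hallucination_probability`, `rank_subsequences`, the `generate` callable, the filler vocabulary) are hypothetical assumptions for illustration.

```python
import random
from typing import Callable, List, Sequence, Tuple

def hallucination_probability(
    generate: Callable[[List[str]], str],   # wraps the LLM: token list -> generated text (assumed interface)
    candidate: Sequence[str],               # candidate causal subsequence (as tokens)
    filler_vocab: Sequence[str],            # tokens used to build randomized contexts
    hallucinated_target: str,               # the hallucinated string being traced
    num_trials: int = 50,
    context_len: int = 32,
) -> float:
    """Estimate P(hallucinated output | candidate subsequence) over randomized contexts."""
    hits = 0
    for _ in range(num_trials):
        # Build a random context and splice the candidate subsequence into it at a random position.
        context = [random.choice(filler_vocab) for _ in range(context_len)]
        pos = random.randint(0, context_len)
        prompt = context[:pos] + list(candidate) + context[pos:]
        if hallucinated_target in generate(prompt):
            hits += 1
    return hits / num_trials

def rank_subsequences(
    generate: Callable[[List[str]], str],
    candidates: Sequence[Sequence[str]],
    filler_vocab: Sequence[str],
    hallucinated_target: str,
) -> List[Tuple[Sequence[str], float]]:
    """Rank candidate subsequences by their estimated hallucination probability."""
    scored = [
        (cand, hallucination_probability(generate, cand, filler_vocab, hallucinated_target))
        for cand in candidates
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Under this sketch, subsequences that keep triggering the hallucinated target across many random contexts are flagged as likely causal associations; the paper's actual algorithm and scoring details may differ.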
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 1927