Keywords: Interpretability, Hallucination Detection, Hallucinations, Truthfulness, Large Language Models
TL;DR: We show that large language models intrinsically encode truthfulness signals through two distinct pathways, Question-Anchored and Answer-Anchored; we characterize their properties and use them to improve hallucination detection.
Abstract: Despite their impressive capabilities, large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness, yet the origins and mechanisms of these signals remain unclear.
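For concreteness, below is a minimal sketch of how such truthfulness signals are commonly read out of internal states in prior work: a linear probe trained on cached hidden activations to separate truthful from hallucinated answers. The activations, labels, and layer choice are placeholder assumptions for illustration, not the setup used in this paper.

```python
# Illustrative only: a linear probe over cached hidden states for hallucination
# detection. The `hidden_states` array and `labels` are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 4096))   # placeholder activations (one layer, final answer token)
labels = rng.integers(0, 2, size=1000)          # placeholder labels: 1 = truthful, 0 = hallucinated

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))  # ~0.5 on random placeholders
```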
In this paper, we demonstrate that truthfulness cues arise from two distinct information pathways: (1) a Question-Anchored pathway that depends on question–answer information flow, and (2) an Answer-Anchored pathway that derives self-contained evidence from the generated answer itself.
First, we validate and disentangle these pathways through attention knockout and token patching.
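To illustrate the attention-knockout idea, the toy sketch below blocks the attention edges from question tokens to the answer's final token in a single-head attention layer, cutting the question-to-answer information flow. The function name, tensor shapes, and token positions are illustrative assumptions, not the paper's implementation.

```python
# A minimal, self-contained sketch of attention knockout on a toy single-head
# attention layer: pre-softmax scores from chosen source positions to a chosen
# target position are set to -inf, so no information flows along those edges.
import torch
import torch.nn.functional as F

def attention_with_knockout(q, k, v, block_src, target_pos):
    """q, k, v: [seq_len, d]; block_src: source positions whose flow into
    `target_pos` is knocked out."""
    d = q.size(-1)
    scores = q @ k.T / d ** 0.5                          # [seq, seq]
    causal = torch.ones_like(scores).triu(diagonal=1).bool()
    scores = scores.masked_fill(causal, float("-inf"))   # standard causal mask
    scores[target_pos, block_src] = float("-inf")        # knockout edges
    return F.softmax(scores, dim=-1) @ v                 # [seq, d]

# Toy usage: block the answer's final token (position 7) from attending to the
# question tokens (positions 0..3), i.e. the Question-Anchored pathway.
torch.manual_seed(0)
q = k = v = torch.randn(8, 16)
out = attention_with_knockout(q, k, v, block_src=[0, 1, 2, 3], target_pos=7)
print(out.shape)  # torch.Size([8, 16])
```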
Next, we uncover notable properties of these two mechanisms and investigate the factors underlying their distinct behaviors.
Further experiments reveal that (1) the two mechanisms are closely associated with LLM knowledge boundaries; (2) internal representations are aware of their distinctions; and (3) there is a clear misalignment between truthfulness encoding and language modeling.
Finally, building on these findings, we propose two applications that improve hallucination detection performance.
Overall, our work provides new insight into how LLMs internally encode truthfulness, offering directions for more reliable and self-aware generative systems.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 24428