Auditing Meta-Cognitive Hallucinations in Reasoning Large Language Models

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-NC-ND 4.0
Keywords: hallucination, Chain-of-Thought, reasoning, metacognition
TL;DR: This paper investigates how hallucinations arise and persist in RLLM reasoning, revealing error self-reinforcement and limited metacognition.
Abstract: The development of Reasoning Large Language Models (RLLMs) has significantly improved multi-step reasoning capabilities, but it has also made hallucinations more frequent and harder to eliminate. While existing approaches address hallucination through external knowledge integration, model parameter analysis, or self-verification mechanisms, they fail to provide comprehensive insight into how hallucinations **emerge** and **evolve** throughout the reasoning chain. In this work, we investigate hallucination causality under constrained knowledge domains by auditing the Chain-of-Thought (CoT) trajectory and assessing the model's cognitive confidence in potentially erroneous or biased claims. Our analysis reveals that in long-CoT settings, RLLMs may iteratively reinforce biases and errors through flawed reflective processes, ultimately inducing hallucinated reasoning paths. Counterintuitively, even when we intervene at the origins of a hallucination, reasoning chains display pronounced "chain disloyalty", resisting correction and sustaining flawed trajectories. We further show that existing hallucination detection methods are *less reliable and interpretable than previously assumed*, especially in complex multi-step reasoning contexts. Unlike Anthropic's circuit tracing, which requires access to model parameters, our auditing **enables more interpretable long-chain hallucination attribution in black-box settings**, demonstrating stronger generalizability and practical utility. Our code is available at [this link](https://github.com/Winnie-Lian/AHa_Meta_Cognitive).
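As a rough illustration of the black-box auditing idea described in the abstract, the sketch below elicits a CoT from a model and then asks the model to rate its confidence in each step, flagging low-confidence claims. This is not the authors' released implementation (see the linked repository for that): `query_model`, `split_into_steps`, the prompts, and the 0.5 threshold are all hypothetical, illustrative assumptions.

```python
# Minimal sketch of black-box CoT auditing via stepwise confidence elicitation.
# `query_model` is a hypothetical stand-in for any chat-completion client; the
# prompts, parsing, and threshold below are assumptions made for illustration.

from typing import Dict, List


def query_model(prompt: str) -> str:
    """Hypothetical black-box call to an RLLM; replace with a real client."""
    raise NotImplementedError


def split_into_steps(cot: str) -> List[str]:
    """Naively treat each non-empty line of the chain-of-thought as one step."""
    return [line.strip() for line in cot.splitlines() if line.strip()]


def audit_cot(question: str, confidence_threshold: float = 0.5) -> List[Dict]:
    """Elicit a CoT, then ask the model to rate its confidence in each step."""
    cot = query_model(f"Answer step by step, one step per line:\n{question}")
    report = []
    for i, step in enumerate(split_into_steps(cot)):
        rating = query_model(
            "On a scale from 0 to 1, how confident are you that the following "
            f"reasoning step is factually correct? Reply with a number only.\n{step}"
        )
        try:
            confidence = float(rating.strip())
        except ValueError:
            confidence = 0.0  # an unparsable rating is treated as untrustworthy
        report.append({
            "step_index": i,
            "step": step,
            "confidence": confidence,
            "flagged": confidence < confidence_threshold,
        })
    return report
```

Flagged steps would then be candidates for intervention or attribution; the paper's findings on "chain disloyalty" concern what happens after such interventions, which this sketch does not attempt to model.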
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 15717