Keywords: Hallucination detection; Knowledge interpretability; Counterfactual intervention; Causal graph; LLM reliability
Abstract: Despite the groundbreaking advancements made by large language models (LLMs), hallucination remains a critical bottleneck for their deployment in high-stakes domains. Existing classification-based methods rely mainly on static, passive signals from internal states, which often capture noise and spurious correlations while overlooking the underlying causal mechanisms. To address this limitation, we shift the paradigm from passive observation to active intervention by introducing CausalGaze, a novel hallucination detection framework based on structural causal models (SCMs). CausalGaze models LLMs' internal states as dynamic causal graphs and employs counterfactual interventions to disentangle causal reasoning paths from incidental noise, thereby enhancing model interpretability. Extensive experiments across four datasets and three widely used LLMs demonstrate the effectiveness of CausalGaze, notably achieving an AUROC improvement of over 5.2\% on the TruthfulQA dataset compared with state-of-the-art baselines.
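Since the abstract's central mechanism is counterfactual intervention on internal states, a minimal sketch may help fix ideas. This is an illustrative toy under stated assumptions, not the paper's CausalGaze implementation: the two-layer "model", the zero-value do-intervention, and all names (toy_forward, causal_effect) are hypothetical.

```python
# Hedged sketch of counterfactual intervention on internal states.
# NOT the authors' method: the toy model, the zero-ablation intervention,
# and all identifiers here are assumptions for illustration only.
from __future__ import annotations

import numpy as np

rng = np.random.default_rng(0)

# Toy "LLM": a hidden state flows through two linear maps to a single output logit.
W1 = rng.normal(size=(8, 8))
w2 = rng.normal(size=8)


def toy_forward(h0: np.ndarray, patch: dict[int, float] | None = None) -> float:
    """Forward pass; `patch` maps hidden-unit index -> counterfactual value."""
    h1 = np.tanh(W1 @ h0)
    if patch:  # do-style intervention: overwrite units regardless of their inputs
        for i, v in patch.items():
            h1[i] = v
    return float(w2 @ h1)


def causal_effect(h0: np.ndarray, unit: int, value: float = 0.0) -> float:
    """Counterfactual effect of one unit: f(h) - f(h | do(h1[unit] = value))."""
    return toy_forward(h0) - toy_forward(h0, patch={unit: value})


h0 = rng.normal(size=8)
effects = np.array([causal_effect(h0, i) for i in range(8)])
# Units with large |effect| lie on causal paths to the output; near-zero units
# are candidates for incidental noise. Effect vectors like this could, in
# principle, serve as features for a downstream hallucination classifier.
print("per-unit counterfactual effects:", np.round(effects, 3))
```

The design choice illustrated is the abstract's observation-versus-intervention distinction: rather than reading activations passively, each unit is overwritten and the resulting change in the output is measured, so correlational noise that has no downstream effect scores near zero.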
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: graph-based methods; causality
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 1451