Keywords: hallucination detection, graph neural networks, LLMs, attention graphs
TL;DR: We propose CHARM, a message-passing neural network that models LLM computations as attributed graphs to detect hallucinations; it integrates attention and activation signals, subsumes and outperforms prior methods across benchmarks and granularities.
Abstract: Large Language Models (LLMs) often generate incorrect or unsupported content, known as hallucinations. Existing detection methods rely on heuristics or simple models over isolated computational traces, such as activations or attention maps. We unify these signals by representing them as attributed graphs, where tokens are nodes, edges follow attentional flows, and both carry features derived from attention scores and activations. Our approach, CHARM, casts hallucination detection as a graph learning task and tackles it by applying GNNs over these attributed graphs. We show that CHARM provably subsumes prior attention-based heuristics and that, experimentally, it consistently outperforms other leading approaches across diverse benchmarks. Our results highlight the important role played by the graph structure and the benefits of combining computational traces, and show that CHARM exhibits promising zero-shot performance on cross-dataset transfer.
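The construction described in the abstract (tokens as nodes, attention scores as edge weights, activations as node features, then message passing) can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the single attention head, the toy dimensions, and the sigmoid readout are all assumptions made for demonstration.

```python
# Toy sketch (NOT the CHARM implementation): build an attributed graph
# from one attention head and run a single message-passing step.
import numpy as np

rng = np.random.default_rng(0)
T, D = 5, 8                       # tokens and hidden size (toy values)
attn = rng.random((T, T))         # attention scores act as edge weights
attn /= attn.sum(axis=1, keepdims=True)   # row-normalise, softmax-style
h = rng.standard_normal((T, D))   # node features taken from activations

def message_pass(h, attn):
    """One GNN layer: aggregate neighbour features along attention edges."""
    msg = attn @ h                # attention-weighted neighbourhood sum
    return np.tanh(msg + h)       # residual combine plus nonlinearity

z = message_pass(h, attn)
w = rng.standard_normal(D)        # hypothetical readout vector
scores = 1 / (1 + np.exp(-(z @ w)))   # per-token hallucination score in (0, 1)
print(scores.shape)               # one score per generated token
```

In a real system the attention tensor and activations would come from the LLM's forward pass (one graph per layer or head), and the readout would be trained rather than random; the sketch only shows how the graph view ties the two trace types together.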
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 19174