COMPASS: Context-Modulated PID Attention Steering System for Hallucination Mitigation

Published: 10 Jan 2026, Last Modified: 10 Jan 2026, LaMAS 2026 Oral, CC BY 4.0
Keywords: Large Language Models (LLMs), Contextual Hallucination, Attention Modulation, PID Control Feedback Loop, Factual Accuracy, Real-Time Intervention, Cross-Model Evaluation, Cross-Dataset Generalization, LLaMA, Mistral, Qwen
TL;DR: We steer LLMs away from hallucinations by dynamically amplifying context-sensitive attention heads.
Abstract: Large language models (LLMs) often generate fluent but factually incorrect statements despite having access to relevant evidence, a failure mode rooted in how they allocate attention between contextual and parametric knowledge. Understanding and steering this internal behavior is key both for trustworthy deployment and for scientific interpretability of model mechanisms. We introduce COMPASS (Context-Modulated PID Attention Steering System), a lightweight, interpretable control framework that embeds a model-based feedback loop directly within decoding. COMPASS quantifies context reliance via a transparent metric, the Context Reliance Score (CRS), which serves as an online probe of how attention heads ground generation in evidence. Using this interpretable signal, a PID controller dynamically modulates attention heads to maintain factual consistency without retraining or multi-pass decoding. Across benchmarks (HotpotQA, XSum, HaluEval, RAGTruth), COMPASS consistently reduces contextual hallucination rates (2.8–5.8% absolute) while revealing how distinct attention heads contribute to evidence alignment. These results highlight feedback-driven interpretability as a pathway toward scientific understanding of LLM behavior.
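To make the feedback loop described in the abstract concrete, here is a minimal sketch of how a PID controller could steer a per-head attention gain toward a target Context Reliance Score during decoding. All names, gain values, the CRS definition, and the clamping range below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical COMPASS-style loop: a PID controller adjusts a multiplicative
# gain on selected attention heads so that the measured Context Reliance
# Score (CRS) tracks a target value at each decoding step.

class PIDController:
    """Textbook discrete PID controller."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement: float) -> float:
        error = self.setpoint - measurement
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def context_reliance_score(attn_on_context: float, attn_total: float) -> float:
    """Toy CRS: fraction of a head's attention mass placed on evidence tokens."""
    return attn_on_context / max(attn_total, 1e-8)


# Decoding-loop skeleton: measure CRS for the steered heads, then fold the
# PID output into a bounded gain applied to those heads' attention logits.
pid = PIDController(kp=0.8, ki=0.05, kd=0.1, setpoint=0.6)  # target CRS (assumed)
gain = 1.0
for step in range(5):
    # Stand-in measurement; a real system would read this from the model's
    # attention maps over the retrieved evidence span at this step.
    measured_crs = context_reliance_score(attn_on_context=0.35 + 0.05 * step,
                                          attn_total=1.0)
    control = pid.update(measured_crs)
    gain = max(0.5, min(2.0, gain + control))  # clamp to keep decoding stable
    print(f"step={step} CRS={measured_crs:.2f} gain={gain:.2f}")
```

The clamped gain keeps the intervention single-pass and lightweight, matching the abstract's claim that no retraining or multi-pass decoding is required.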
Submission Number: 15