Keywords: context compression, retrieval-augmented generation, attention probing, mechanistic interpretability, long-context processing
Abstract: Retrieval-augmented generation (RAG) often suffers from long and noisy retrieved contexts. Prior context compression methods rely on predefined importance metrics or supervised compression models, rather than on the model's own inference-time behavior. We propose Sentinel, a lightweight sentence-level compression framework that treats context compression as an understanding-decoding problem. Rather than using attention as a direct relevance score, Sentinel probes the native attention behaviors of a frozen LLM with a lightweight readout to decode which parts of the context are actually utilized when answering a query. We empirically observe that decoded relevance signals are sufficiently consistent across model scales to support effective compression with compact proxy models. On LongBench, Sentinel with a 0.5B proxy model achieves up to 5× compression while matching the QA performance of 7B-scale baselines, and despite being trained only on English QA data, it generalizes effectively to Chinese and out-of-domain settings.
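The pipeline described in the abstract (a lightweight readout decoding per-sentence relevance from attention features, then keeping a budgeted subset of sentences) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values, readout weights, and the single-feature setup are hypothetical stand-ins, whereas the real system extracts attention signals from a frozen proxy LLM.

```python
from math import exp

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + exp(-x))

def decode_relevance(attn_features, weights, bias):
    """Lightweight linear 'readout' mapping per-sentence attention
    features to a relevance probability (hypothetical parameters)."""
    return [sigmoid(sum(w * f for w, f in zip(weights, feats)) + bias)
            for feats in attn_features]

def compress(sentences, attn_features, weights, bias, ratio=0.2):
    """Keep the top-scoring fraction of sentences, preserving their
    original order so the compressed context stays readable."""
    scores = decode_relevance(attn_features, weights, bias)
    k = max(1, int(len(sentences) * ratio))
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]

# Toy example: five context sentences, one attention feature each.
sents = ["s0", "s1", "s2", "s3", "s4"]
feats = [[0.1], [0.9], [0.2], [0.8], [0.05]]
print(compress(sents, feats, weights=[5.0], bias=-2.0, ratio=0.4))
# -> ['s1', 's3']
```

The key design point mirrored here is that compression decisions come from a decoded utilization signal rather than from attention weights used directly as scores.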
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: multihop QA, open-domain QA, interpretability
Contribution Types: Model analysis & interpretability, Approaches for low-compute settings / efficiency
Languages Studied: English, Chinese
Submission Number: 8934