Attention-Gate: Adaptive In-Context KV-Cache Eviction in LLMs

ACL ARR 2025 May Submission 4761 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: The KV-Cache technique has become standard for inference in large language models (LLMs). This paper introduces a dynamic KV-Cache eviction policy that injects lightweight *Attention-Gates* (AGs) into the model to maximize the utilization efficiency of the KV-Cache. Each AG takes the *global* context as input and yields eviction flags for each token. The self-attention modules in the model proceed according to these flags and cache only a subset of the KV states for next-token prediction. The Attention-Gates can yield different flags for different heads and layers, are easily tuned on top of a pre-trained LLM via continual pre-training or supervised fine-tuning, and introduce minimal computational and memory overhead. We conduct empirical evaluations across multiple scenarios, showing that our method significantly reduces redundant KV-Cache memory usage while maintaining competitive performance.
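To make the mechanism described in the abstract concrete, here is a minimal sketch (not the authors' implementation) of a per-layer gate that scores every token from the global context and emits per-head keep/evict flags, plus a helper that drops evicted tokens' KV states from the cache. The module names, the sigmoid-plus-threshold gating, and the keep-threshold `tau` are assumptions made for illustration.

```python
# Illustrative sketch only, assuming a simple linear gate with a fixed
# keep-threshold; the paper trains the gates via continual pre-training
# or supervised fine-tuning.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, tau: float = 0.5):
        super().__init__()
        self.score = nn.Linear(hidden_size, num_heads)  # per-head gate logits
        self.tau = tau                                  # keep threshold (assumed)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_size] (global context).
        # Returns boolean keep-flags of shape [batch, num_heads, seq_len].
        logits = self.score(hidden_states)              # [B, T, H]
        keep_prob = torch.sigmoid(logits)               # gate values in (0, 1)
        return keep_prob.transpose(1, 2) > self.tau     # [B, H, T]


def evict_kv(keys, values, keep_flags):
    # keys/values: [batch, num_heads, seq_len, head_dim]
    # keep_flags:  [batch, num_heads, seq_len]; True = keep in cache.
    # For simplicity, keep the same number of tokens per head (the per-head
    # minimum) so the compacted cache remains a dense tensor.
    k = int(keep_flags.sum(dim=-1).min().item())
    idx = keep_flags.float().topk(k, dim=-1).indices.sort(dim=-1).values
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, keys.size(-1))
    return keys.gather(2, idx), values.gather(2, idx)
```

The threshold-based eviction above is only one plausible way to turn gate scores into cache decisions; the key point it illustrates is that the flags can differ across heads (and, with one gate per layer, across layers), so each attention head retains its own subset of KV states.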
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM Efficiency
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English
Submission Number: 4761