Keywords: token eviction, LLMs, attention output, constrained KV cache generation
TL;DR: CAOTE improves token eviction in LLMs by optimizing for the eviction error, seamlessly combining attention scores and value vectors. It reduces memory and compute costs and boosts accuracy across models, especially on resource-limited devices.
Abstract: While long-context support has extended the abilities of large language models, it also incurs memory and compute challenges that become critical bottlenecks on resource-restricted devices.
Token eviction, a widely adopted post-training technique that alleviates these bottlenecks by evicting less important tokens from the cache, typically uses attention scores as a proxy metric for token importance.
However, a major limitation of attention scores as a token-wise importance metric is that they carry no information about a token's contribution to the attention output.
In this paper, we propose a simple eviction criterion based on the contribution of cached tokens to attention outputs. Our method, CAOTE, optimizes for the error incurred by token eviction by seamlessly integrating attention scores and value vectors, and is the first method to use information from value vectors on top of attention-based eviction scores. Moreover, CAOTE acts as a meta-heuristic that can be flexibly combined with any token eviction method.
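The abstract does not spell out the scoring function, but one natural closed form follows from its description: with softmax attention weights $a$ (summing to 1) and attention output $o = \sum_j a_j v_j$, evicting token $i$ and renormalizing the remaining weights perturbs the output by $\frac{a_i}{1-a_i}(o - v_i)$, so the norm of that term can serve as an eviction-error score. Below is a minimal single-head PyTorch sketch of this idea; the function name `caote_scores`, the tensor shapes, and the exact formula are illustrative assumptions, not the paper's verbatim definition.

```python
import torch

def caote_scores(attn: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
    """Hypothetical per-token eviction-error scores for a single head.

    attn:   (num_tokens,) softmax attention probabilities over cached tokens.
    values: (num_tokens, head_dim) cached value vectors.
    Returns (num_tokens,) scores; a lower score means the token is cheaper to evict.
    """
    # Current attention output: o = sum_j a_j * v_j
    output = attn @ values  # (head_dim,)
    # Evicting token i and renormalizing the remaining weights changes the
    # output by (a_i / (1 - a_i)) * (o - v_i); score each token by its norm.
    weight = attn / (1.0 - attn).clamp_min(1e-6)
    return weight * (output.unsqueeze(0) - values).norm(dim=-1)

# Keep the tokens whose eviction would distort the attention output the most.
attn = torch.softmax(torch.randn(8), dim=-1)
values = torch.randn(8, 64)
keep_idx = caote_scores(attn, values).topk(k=6).indices
```

Because such a score reuses only quantities the attention layer already computes (weights, values, and the output), it can plausibly be layered on top of any attention-score-based eviction policy at little extra cost, consistent with the meta-heuristic framing above.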
We show that CAOTE, when combined with state-of-the-art attention-score-based methods, consistently improves downstream-task accuracy for the Llama-3 and Qwen-2.5 model families, underscoring the importance of leveraging value information during token eviction.
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 17779