A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization
Abstract: Long context large language models (LLMs) pose significant challenges for efficient serving due to the large memory footprint and high access overhead of the KV cache.
Retrieval-based KV cache reduction methods can mitigate these challenges, typically by offloading the complete KV cache to the CPU and retrieving the necessary tokens on demand during inference.
However, these methods still suffer from noticeable accuracy degradation and incur extra retrieval overhead.
To address these limitations, this paper proposes A$^2$ATS, a novel retrieval-based KV cache reduction method.
A$^2$ATS aims to obtain an accurate approximation of attention scores by applying the vector quantization technique to key states, thereby enabling efficient and precise retrieval of the top-K tokens.
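As a rough illustration of the retrieval step (notation ours, not taken from the paper): given a codebook $\{c_1, \dots, c_M\}$ learned over key states and an assignment $a(i)$ mapping each cached key $k_i$ to a centroid, the score of the current query $q$ against token $i$ can be approximated without reading the full KV cache,
$$ s_i = q^\top k_i \;\approx\; \hat{s}_i = q^\top c_{a(i)}, \qquad \mathcal{T} = \operatorname*{top\text{-}K}_{i} \; \hat{s}_i, $$
so that only the $K$ tokens in $\mathcal{T}$ need to be fetched for exact attention.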
First, we propose Windowed Rotary Position Embedding, which decouples the positional dependency from query and key states after position embedding.
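One plausible way to realize this decoupling, sketched under our own assumptions rather than as the paper's exact formulation: keep standard RoPE within a recent window of size $w$ and clip the relative distance for older tokens, so that all keys outside the window share a single rotation and their retrieval scores no longer depend on position,
$$ \hat{s}_i = q^\top R_{\min(t - i,\, w)}\, k_i, $$
where $t$ is the current decoding position and $R_d$ denotes the rotary rotation for relative distance $d$; position-independent keys are what make a single codebook over key states reusable across positions.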
Then, we propose query-aware vector quantization that directly optimizes the objective of attention score approximation.
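As one reading of "optimizing the objective of attention score approximation directly" (our interpretation, not a statement of the paper's exact objective): rather than minimizing the usual reconstruction distortion $\lVert k_i - c_{a(i)} \rVert_2^2$, the codebook is fit to the squared error of the scores themselves under the query distribution,
$$ \min_{c_1, \dots, c_M,\; a(\cdot)} \; \mathbb{E}_{q}\!\left[ \sum_i \big( q^\top k_i - q^\top c_{a(i)} \big)^2 \right], $$
which weights the quantization error by the directions that queries actually probe.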
Finally, we design a heterogeneous inference architecture for KV cache offloading, enabling long context serving with larger batch sizes.
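A minimal sketch of how such an offloading pipeline might be wired, assuming a PyTorch-style setup; all names, shapes, and the `retrieve_topk_kv` helper are illustrative assumptions, not the paper's implementation. The full KV cache and per-token codeword indices stay in pinned CPU memory, only the small codebook resides on the GPU, and each decode step scores the centroids, selects the top-K tokens, and gathers just those entries.

```python
import torch

def retrieve_topk_kv(q, codebook, assignments, k_cpu, v_cpu, top_k=256):
    """Approximate-score retrieval with a CPU-offloaded KV cache (illustrative sketch).

    q           : (d,)   current query state on GPU
    codebook    : (M, d) key-state centroids on GPU
    assignments : (T,)   codeword index of each cached token, on GPU
    k_cpu, v_cpu: (T, d) full key/value cache in pinned CPU memory
    """
    # Approximate attention scores q . c_{a(i)} for every cached token i,
    # computed entirely from the small on-GPU codebook.
    centroid_scores = codebook @ q                 # (M,)
    approx_scores = centroid_scores[assignments]   # (T,)

    # Select the top-K most relevant tokens without touching the CPU cache.
    top_idx = approx_scores.topk(min(top_k, approx_scores.numel())).indices

    # Gather only the selected KV entries from CPU to GPU for exact attention.
    idx_cpu = top_idx.cpu()
    k_sel = k_cpu[idx_cpu].to(q.device, non_blocking=True)
    v_sel = v_cpu[idx_cpu].to(q.device, non_blocking=True)
    return k_sel, v_sel, top_idx
```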
Experimental results demonstrate that A$^2$ATS achieves lower performance degradation with similar or lower overhead than existing methods, thereby increasing long context serving throughput by up to $2.7\times$.
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: NLP in resource-constrained settings
Languages Studied: N/A
Submission Number: 1684