Where Matters More Than What: Decoding-aligned KV Cache Compression via Position-aware Pseudo-queries
Keywords: LLM, KV cache compression, Long-context
Abstract: The Key-Value (KV) cache is crucial for efficient Large Language Model (LLM) inference, but excessively long contexts drastically increase its memory footprint. Existing KV cache compression methods typically rely on input-side attention patterns within a prompt observation window to estimate token importance during the prefill stage. Because these assessments are not derived from the decoding process, such methods fail to preserve tokens that are critical for future generation. Intuitively, an effective observation window should mirror the decoding-stage queries so that it accurately reflects which tokens the generation process will attend to. However, ground-truth decoding queries are inherently unavailable during inference. When constructing pseudo-queries to approximate them, we find that positional information plays a more critical role than semantic content. Motivated by this insight, we propose decoding-aligned KV cache compression via position-aware pseudo-queries (DapQ), a novel and lightweight eviction framework that uses position-aware pseudo-queries to simulate future output tokens, thereby establishing an effective observation window for importance assessment. This enables precise token eviction that aligns closely with the actual generation context. Extensive evaluation across multiple benchmarks and LLMs demonstrates that DapQ achieves superior performance, particularly under strict memory constraints (e.g., nearly lossless performance of 99.5\% on NIAH with a 3\% KV cache budget).
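The abstract does not specify how DapQ builds its pseudo-queries, so the following is only a minimal illustrative sketch of the general idea: score cached tokens with synthetic queries whose rotary position is set to anticipated decoding positions, then keep the top-scoring tokens under a budget. The names (`rope`, `compress_kv`, `n_pseudo`, `window_queries`) and the choice of pseudo-query content are assumptions, not the paper's actual construction.

```python
import torch

def rope(x, pos, base=10000.0):
    """Apply rotary position embeddings to vectors x at integer positions pos."""
    half = x.shape[-1] // 2
    freqs = 1.0 / (base ** (torch.arange(half, dtype=x.dtype) / half))
    angles = pos[:, None].to(x.dtype) * freqs[None, :]          # [n, d/2]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def compress_kv(keys, values, window_queries, budget, n_pseudo=4):
    """Keep the `budget` cached tokens that position-aware pseudo-queries
    attend to most strongly (hypothetical sketch, not the DapQ algorithm).

    keys, values   : [n_ctx, d]  cached keys / values for one attention head
    window_queries : [w, d]      queries from an observation window, used only
                                 as a stand-in source of pseudo-query content
    """
    n_ctx, d = keys.shape
    # Pseudo-query content: mean of the window queries (placeholder choice);
    # per the paper's insight, the decisive ingredient is the position below.
    content = window_queries.mean(dim=0, keepdim=True).repeat(n_pseudo, 1)
    future_pos = torch.arange(n_ctx, n_ctx + n_pseudo)          # anticipated decoding positions
    pseudo_q = rope(content, future_pos)                         # [n_pseudo, d]

    # Importance of each cached token = attention mass it receives from the pseudo-queries.
    scores = torch.softmax(pseudo_q @ keys.T / d ** 0.5, dim=-1).sum(dim=0)  # [n_ctx]
    keep = scores.topk(min(budget, n_ctx)).indices.sort().values
    return keys[keep], values[keep], keep
```

In this sketch a 3\% budget simply means `budget = int(0.03 * n_ctx)`; the retained indices are returned so the corresponding position ids can be kept consistent downstream.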
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16503