Keywords: Large Language Models; Long Context; Efficient Inference
Abstract: Large language models (LLMs) with extended context windows enable powerful applications but impose significant memory overhead, as caching all key–value (KV) states grows linearly with sequence length and batch size. Existing cache eviction methods address this by exploiting attention sparsity, yet they typically rank tokens heuristically using accumulated attention weights without considering their true impact on attention outputs. We propose Optimal Brain Cache (OBCache), a principled framework that formulates cache eviction as a layer-wise structured pruning problem. Building on Optimal Brain Damage (OBD) theory, OBCache quantifies token saliency by measuring the perturbation to attention outputs induced by pruning tokens, with closed-form scores derived for isolated keys, isolated values, and joint key–value pairs. Our scores account not only for attention weights but also for information from value states and attention outputs, thereby enhancing existing eviction strategies with output-aware signals. Experiments on LLaMA and Qwen models show that replacing the heuristic scores used in existing methods, which estimate token saliency across different query positions, with OBCache's output-aware scores consistently improves long-context accuracy.
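As a rough illustration of the output-aware saliency idea, the sketch below scores each cached key–value pair by the leave-one-out perturbation its removal would cause to a single head's attention output, then keeps the highest-scoring tokens under a cache budget. This is a brute-force stand-in for intuition only, not the paper's closed-form OBD-derived scores (the abstract states those are derived analytically for keys, values, and joint pairs); the function name, shapes, and the leave-one-out formulation are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_saliency_leave_one_out(q, K, V):
    """Output-aware token saliency for a single query and a single head.

    Scores each cached key-value pair by the L2 perturbation of the
    attention output when that token is evicted (leave-one-out).
    Shapes (assumed for illustration): q (d,), K (n, d), V (n, d_v).
    Returns an (n,) array of saliency scores.
    """
    d = q.shape[-1]
    logits = K @ q / np.sqrt(d)            # (n,) scaled dot-product logits
    attn = softmax(logits)                  # (n,) attention weights
    out = attn @ V                          # (d_v,) full attention output

    n = K.shape[0]
    scores = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        attn_i = softmax(logits[keep])      # renormalized weights without token i
        out_i = attn_i @ V[keep]            # output with token i evicted
        scores[i] = np.linalg.norm(out - out_i)  # perturbation to the output
    return scores

# Toy usage: evict the lowest-saliency tokens from a random cache.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
saliency = kv_saliency_leave_one_out(q, K, V)
budget = 96                                 # hypothetical cache budget
keep_idx = np.sort(np.argsort(saliency)[-budget:])
K_pruned, V_pruned = K[keep_idx], V[keep_idx]
```

Unlike this per-token recomputation, an output-aware score in the spirit of OBCache would be evaluated in closed form, so the contrast with attention-weight-only heuristics is in what is measured (output perturbation) rather than in the brute-force procedure shown here.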
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 1117