When Efficiency Meets Safety: A Benchmark Security Analysis of KV Cache Compression in Large Language Models
Keywords: Jailbreak Attack, KV Cache Compression, Large Language Model
Abstract: Key-Value (KV) caching is widely used in large language models (LLMs) to enable efficient long-context inference, yet its security implications remain underexplored. We present the first systematic study of how KV cache compression interacts with jailbreak attacks, evaluating four model families under a diverse set of attacks. We identify a double-edged effect. (i) On one hand, compression can induce \textbf{Accidental Robustness}: optimization-based and encoding-based attacks fail due to Malicious Semantic Eviction, in which an attack's own attention redirection reduces the malicious query's cache importance, and Gradient Mismatch, in which discrete compression operations break jailbreak optimization. (ii) On the other hand, a \textbf{Vulnerability Paradox} arises under merging-based compression for human-designed attacks, where aggressive merging in shallow layers triggers functional head collapse and amplifies attack success rates. To address this, we propose \textbf{Safe-CAM}, a history-aware, per-head feedback merging strategy that prevents safety degradation while maintaining efficiency. Experiments show that Safe-CAM fully restores safety (0\% ASR) and improves benign task performance with minimal overhead. Our study highlights that KV cache compression is not only an efficiency mechanism but also a safety-critical design factor in LLM deployment.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Jailbreak Attack, KV Cache Compression, Large Language Model
Contribution Types: Model analysis & interpretability, Reproduction study
Languages Studied: Chinese, English
Submission Number: 10374