KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing

ACL ARR 2025 February Submission 6327 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: The development of large language models (LLMs) has significantly expanded model sizes, resulting in substantial GPU memory requirements during inference. The KV (key-value) cache, which stores the keys and values of the attention map, accounts for more than 80% of this memory consumption. Most existing KV cache compression methods focus on intra-layer compression within a single Transformer layer, but few works consider layer-wise compression. In this paper, we propose a plug-and-play method called $\textit{KVSharer}$, which shares the KV cache between layers to achieve layer-wise compression. Rather than intuitively sharing based on higher similarity, we discover a counterintuitive phenomenon: sharing dissimilar KV caches better preserves model performance. Experiments show that $\textit{KVSharer}$ can reduce KV cache computation by 30%, thereby lowering memory consumption without significantly impacting model performance, and it also achieves at least 1.3x generation acceleration. Additionally, we verify that $\textit{KVSharer}$ is compatible with existing intra-layer KV cache compression methods, and combining both can further save memory.
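To make the layer-wise sharing idea in the abstract concrete, the sketch below shows one way dissimilarity-based sharing could be set up: rank layer pairs by how different their (calibration-set) KV caches are, then let a budgeted number of layers reuse another layer's cache. This is a minimal illustration only; the function names, the Euclidean dissimilarity measure, and the greedy pairing heuristic are assumptions for exposition and not the authors' exact KVSharer procedure.

```python
import torch

def rank_layer_pairs_by_dissimilarity(kv_caches):
    """Rank layer pairs from most to least dissimilar KV cache.

    kv_caches: one tensor per Transformer layer, e.g. key/value states
    averaged over a small calibration set (shapes must match across layers).
    """
    flat = [kv.flatten().float() for kv in kv_caches]
    pairs = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            # Euclidean distance as a simple stand-in dissimilarity measure.
            dist = torch.dist(flat[i], flat[j]).item()
            pairs.append((dist, i, j))
    # Most dissimilar pairs first.
    return sorted(pairs, key=lambda p: p[0], reverse=True)

def build_sharing_map(kv_caches, num_shared_layers):
    """Greedily assign `num_shared_layers` layers to reuse another layer's KV cache."""
    sharing_map = {}  # replaced_layer -> source_layer whose cache it reuses
    for _, i, j in rank_layer_pairs_by_dissimilarity(kv_caches):
        if len(sharing_map) >= num_shared_layers:
            break
        # Let the later layer reuse the earlier layer's cache, if both are unassigned.
        if j not in sharing_map and i not in sharing_map:
            sharing_map[j] = i
    return sharing_map

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-in: 8 layers, each with an averaged (heads, seq, dim) KV tensor.
    caches = [torch.randn(8, 128, 64) for _ in range(8)]
    print(build_sharing_map(caches, num_shared_layers=2))
```

In practice, layers listed in the sharing map would skip their own KV computation at inference time and read the source layer's cache instead, which is where the reported memory and latency savings would come from.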
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: KV Cache, Large Language Model, Layer-wise sharing
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches to low-compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 6327