CommVQ: Commutative Vector Quantization for KV Cache Compression

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-ND 4.0
TL;DR: We propose CommVQ, a novel KV cache quantization method that reduces FP16 KV cache size by 87.5% while maintaining high accuracy through additive quantization and a RoPE-commutative codebook.
Abstract: Large Language Models (LLMs) are increasingly used in applications requiring long context lengths, but the key-value (KV) cache often becomes a memory bottleneck on GPUs as context grows. To address this, we propose Commutative Vector Quantization (CommVQ) to significantly reduce memory usage for long-context LLM inference. We first introduce additive quantization with a lightweight encoder and codebook to compress the KV cache, which can be decoded via simple matrix multiplication. To further reduce computational costs during decoding, we design the codebook to be commutative with Rotary Position Embedding (RoPE) and train it using an Expectation-Maximization (EM) algorithm. This enables efficient integration of decoding into the self-attention mechanism. Our approach achieves high accuracy with additive quantization and low overhead via the RoPE-commutative codebook. Experiments on long-context benchmarks and GSM8K show that our method reduces FP16 KV cache size by 87.5% with 2-bit quantization, while outperforming state-of-the-art KV cache quantization methods. Notably, it enables 1-bit KV cache quantization with minimal accuracy loss, allowing a LLaMA-3.1 8B model to run with a 128K context length on a single RTX 4090 GPU. The source code is available at: https://github.com/UMass-Embodied-AGI/CommVQ.
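The RoPE-commutative codebook rests on a simple algebraic fact: RoPE acts as a block-diagonal rotation on each 2D slice of a key vector, and any block-diagonal matrix whose 2x2 blocks have the scaled-rotation form [[a, -b], [b, a]] commutes with such a rotation. The NumPy sketch below is only an illustration of that property, not the authors' implementation; the function names, shapes, and the way the codebook matrix is built are assumptions made for the example. Commutativity is what allows codebook decoding to be folded into self-attention on either side of the RoPE transform.

```python
# Minimal sketch (illustrative assumptions, not the CommVQ code base):
# verifies that a block-diagonal matrix of 2x2 scaled-rotation blocks
# commutes with the RoPE rotation matrix for a given position.
import numpy as np

def rope_matrix(pos, head_dim, base=10000.0):
    """Block-diagonal RoPE rotation for a single token position."""
    R = np.zeros((head_dim, head_dim))
    for i in range(head_dim // 2):
        theta = pos * base ** (-2 * i / head_dim)
        c, s = np.cos(theta), np.sin(theta)
        R[2*i:2*i+2, 2*i:2*i+2] = [[c, -s], [s, c]]
    return R

def scaled_rotation_codebook(a, b):
    """Block-diagonal matrix whose 2x2 blocks are [[a, -b], [b, a]].
    Each block acts like multiplication by the complex number a + bi,
    so it commutes with any 2D rotation block of RoPE."""
    head_dim = 2 * len(a)
    C = np.zeros((head_dim, head_dim))
    for i, (ai, bi) in enumerate(zip(a, b)):
        C[2*i:2*i+2, 2*i:2*i+2] = [[ai, -bi], [bi, ai]]
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    head_dim = 8
    R = rope_matrix(pos=5, head_dim=head_dim)
    C = scaled_rotation_codebook(rng.normal(size=head_dim // 2),
                                 rng.normal(size=head_dim // 2))
    # R @ C == C @ R, so decoding a quantized key with this codebook
    # structure can be applied before or after RoPE interchangeably.
    print(np.allclose(R @ C, C @ R))  # True
```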
Lay Summary: Large language models (LLMs), like those used in chatbots and document analysis, need to remember a lot of information as they read longer texts. This memory takes up a lot of space on our computers, making it hard to run these models efficiently. Our work introduces a new method, called CommVQ, that compresses this memory so it takes up much less space, without hurting the model’s performance. This allows large models to handle much longer texts on everyday computer hardware. Our approach makes these powerful models faster, cheaper, and more accessible for real-world use.
Link To Code: https://github.com/UMass-Embodied-AGI/CommVQ
Primary Area: Deep Learning->Large Language Models
Keywords: KV Cache, LLM
Submission Number: 1131