ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference

Published: 01 May 2025, Last Modified: 18 Jun 2025
Venue: ICML 2025 Spotlight Poster
License: CC BY 4.0
TL;DR: High-Throughput Long-Context LLM Inference System
Abstract: With the widespread deployment of long-context large language models (LLMs), there has been a growing demand for efficient support of high-throughput inference. However, as the key-value (KV) cache expands with the sequence length, the increasing memory footprint and the need to access it for decoding both result in low throughput when serving long-context LLMs. While various dynamic sparse attention methods have been proposed to accelerate inference while maintaining generation quality, they either fail to sufficiently reduce GPU memory usage or introduce significant decoding latency by offloading the KV cache to the CPU. We present ShadowKV, a high-throughput long-context LLM inference system that stores the low-rank key cache and offloads the value cache to reduce the memory footprint for larger batch sizes and longer sequences. To minimize decoding latency, ShadowKV employs an accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly. Evaluating ShadowKV on benchmarks such as RULER and LongBench, and on models such as Llama-3.1-8B and GLM-4-9B-1M, we demonstrate that it achieves up to 6$\times$ larger batch sizes and 3.04$\times$ higher throughput on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory.
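The mechanism sketched in the abstract (a low-rank key cache kept on the GPU, a value cache offloaded to the CPU, and sparse KV reconstruction at decode time) can be illustrated with a minimal PyTorch sketch. Everything below is a simplified toy under stated assumptions: a single attention head, random tensors, a chunk-mean "landmark" heuristic for selection, and placeholder values for rank, chunk size, and top-k; it is not the authors' implementation and omits RoPE handling, batching, pinned-memory transfers, and CUDA-level optimizations.

```python
import torch

torch.manual_seed(0)

# --- Prefill: build the compressed caches ------------------------------------
seq_len, head_dim, rank, chunk = 4096, 128, 32, 64   # illustrative sizes only

K = torch.randn(seq_len, head_dim)   # full key cache before compression
V = torch.randn(seq_len, head_dim)   # full value cache

# 1) Keep only a low-rank factorization of the key cache on the GPU.
U, S, Vh = torch.linalg.svd(K, full_matrices=False)
A = U[:, :rank] * S[:rank]           # (seq_len, rank)   stays on GPU
B = Vh[:rank]                        # (rank, head_dim)  stays on GPU

# 2) Offload the value cache to the CPU (pinned memory in a real system).
V_cpu = V.cpu()

# 3) Keep one cheap "landmark" per chunk (here: the chunk-mean key) for scoring.
landmarks = K.view(seq_len // chunk, chunk, head_dim).mean(dim=1)

# --- Decode: select and reconstruct a minimal sparse KV set ------------------
def sparse_attend(q, top_chunks=8):
    """Score chunks via landmarks, rebuild the selected keys from the
    low-rank factors, and fetch only the selected values from the CPU."""
    scores = landmarks @ q                               # (num_chunks,)
    idx = scores.topk(top_chunks).indices                # chunks worth attending to
    token_idx = (idx[:, None] * chunk +
                 torch.arange(chunk)).reshape(-1)        # token positions in chunks

    K_sel = A[token_idx] @ B                             # reconstruct sparse keys
    V_sel = V_cpu[token_idx].to(q.device)                # gather sparse values

    attn = torch.softmax(K_sel @ q / head_dim ** 0.5, dim=0)
    return attn @ V_sel

q = torch.randn(head_dim)
out = sparse_attend(q)
print(out.shape)  # torch.Size([128])
```

The sketch only conveys the shape of the trade-off: the GPU holds O(seq_len x rank) key factors instead of the full key cache, and per decoding step only top_chunks x chunk values cross the CPU-GPU boundary rather than the entire value cache.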
Lay Summary: Large language models that can understand very long texts require significant computer memory, which makes them slow and expensive to use, especially when many people are using them at once. Our research introduces ShadowKV, a new system designed to speed up these models substantially. ShadowKV works by smartly managing the model's memory: it keeps a small, compressed version of the most important data on the GPU and moves the less critical data to the CPU. When the model needs information, ShadowKV quickly finds and retrieves only what's essential, avoiding delays and ensuring the model remains accurate. This allows us to handle many more requests and much longer texts simultaneously, making powerful long-context AI more efficient and accessible for broader use.
Link To Code: https://github.com/ByteDance-Seed/ShadowKV
Primary Area: Deep Learning->Large Language Models
Keywords: Long-Context LLMs, Inference, KV Cache Optimization
Submission Number: 6997