Keywords: KV Cache, Prefix Sharing, LRU, Large Language Models, LLM Routing, KV Cache Eviction, Multi-LLM Serving
TL;DR: We present the first unified model of KV cache eviction and query routing, and propose algorithms that combine provably competitive randomized eviction with learning-based routing to significantly boost inference efficiency and reduce latency.
Abstract: KV caching is a fundamental technique for accelerating Large Language Model (LLM) inference by reusing key-value (KV) pairs from previous queries, but its effectiveness under limited memory is highly sensitive to the eviction policy.
The default Least Recently Used (LRU) eviction algorithm struggles with dynamic online query arrivals, especially in multi-LLM serving scenarios, where balancing query load across workers and maximizing the cache hit rate of each worker are inherently conflicting objectives.
We give the first unified mathematical model that captures the core trade-offs between KV cache eviction and query routing.
Our analysis reveals the theoretical limitations of existing methods and leads to principled algorithms that integrate provably competitive randomized KV cache eviction with learning-based routing that adapts to evolving query patterns, thereby balancing query load and cache hit rate.
Our theoretical results are validated by extensive experiments across 4 benchmarks and 3 prefix-sharing settings, demonstrating up to a **6.92$\times$** higher cache hit rate, an **11.96$\times$** reduction in latency, a **14.06$\times$** reduction in time-to-first-token (TTFT), and a **77.4%** increase in throughput over state-of-the-art methods.
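For intuition on the LRU baseline the abstract discusses, below is a minimal, self-contained sketch of an LRU-evicted prefix cache. It is not the paper's algorithm; the class name `LRUPrefixCache` and the simplification that every cached prefix occupies one block are assumptions made purely for illustration. The example shows how eviction under tight capacity can drop a prefix that a later query would still have reused.

```python
from collections import OrderedDict

class LRUPrefixCache:
    """Toy LRU cache keyed by token-prefix tuples (illustrative only).

    Assumes each cached prefix occupies one block; real paged KV caches
    track per-block sizes and store actual KV tensors.
    """

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.store = OrderedDict()  # prefix tuple -> stand-in for KV tensors

    def lookup(self, tokens: tuple) -> int:
        """Return the length of the longest cached prefix of `tokens`
        and mark that entry as most recently used (a cache hit)."""
        for end in range(len(tokens), 0, -1):
            prefix = tokens[:end]
            if prefix in self.store:
                self.store.move_to_end(prefix)  # refresh recency
                return end
        return 0  # cache miss

    def insert(self, tokens: tuple) -> None:
        """Cache the prefix, evicting the least recently used entry
        whenever capacity is exceeded (LRU eviction)."""
        self.store[tokens] = None  # stand-in for the actual KV tensors
        self.store.move_to_end(tokens)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict LRU entry

# Example: queries sharing a system-prompt prefix on one worker.
cache = LRUPrefixCache(capacity_blocks=2)
cache.insert(("sys", "docA"))
cache.insert(("sys", "docB"))
print(cache.lookup(("sys", "docA", "q1")))  # 2: shared prefix ("sys", "docA") hit
cache.insert(("sys", "docC"))               # evicts LRU prefix ("sys", "docB")
print(cache.lookup(("sys", "docB", "q2")))  # 0: miss, prefix was evicted
```

In the last step, the eviction triggered by a new arrival discards a prefix that the next query would have reused, which is the kind of behavior under dynamic online arrivals that motivates the randomized eviction and routing analysis described above.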
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 13606