Tail-Optimized Caching for LLM Inference

Published: 11 Jun 2025, Last Modified: 10 Jul 2025
Venue: ES-FoMo III
License: CC BY 4.0
Keywords: prompt caching, large language models, tail latency, KV‑cache eviction, Least‑Recently‑Used
Abstract: Prompt caching is critical for reducing latency and cost in LLM inference: OpenAI and Anthropic report cost savings of up to 50–90% through prompt reuse. Despite its widespread success, little is known about what constitutes an optimal prompt caching policy, particularly when optimizing tail latency, a metric of central importance to practitioners. The widely used Least Recently Used (LRU) policy can perform arbitrarily poorly on this metric, as it is oblivious to the heterogeneity of conversation lengths. To address this gap, we propose Tail-Optimized LRU, a simple two-line modification that reallocates KV cache capacity to prioritize high-latency conversations by evicting cache entries unlikely to affect future turns. Though the implementation is simple, we prove its optimality under a natural stochastic model of conversation dynamics, providing the first theoretical justification for LRU in this setting, a result that may be of independent interest to the caching community. Experimentally, on real conversation data from WildChat (Zhao et al., 2024), Tail-Optimized LRU achieves up to a 27.5% reduction in P90 tail Time to First Token (TTFT) latency and a 23.9% reduction in P95 tail latency compared to LRU, along with up to a 40% decrease in violations of a 200 ms SLO. We believe this provides a practical and theoretically grounded option for practitioners seeking to optimize tail latency in real-world LLM deployments.
Submission Number: 115
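
To make the idea in the abstract concrete, here is a minimal Python sketch of a tail-aware twist on LRU eviction for a KV cache. It is an illustration only, not the paper's actual policy: the constants `PREFILL_SECONDS_PER_TOKEN` and `TTFT_SLO_SECONDS`, the class interface (`access`, `_pick_victim`), and the specific heuristic (skip over entries whose re-prefill cost would blow the TTFT SLO) are all assumptions; the paper's "two-line modification" may differ.

```python
from collections import OrderedDict

# Hypothetical cost model: TTFT on a cache miss grows with the number of
# prompt tokens that must be re-prefilled. The constant is illustrative.
PREFILL_SECONDS_PER_TOKEN = 0.0002  # assumed, not from the paper
TTFT_SLO_SECONDS = 0.200            # the 200 ms SLO mentioned in the abstract


class TailOptimizedLRU:
    """KV-cache eviction: LRU with a tail-aware tweak (illustrative sketch).

    Scans candidates in LRU order but prefers victims whose context is short
    enough that re-prefilling it on a future turn would still meet the TTFT
    SLO, steering evictions away from conversations that would suffer high
    tail latency on a miss. The paper's actual rule may differ.
    """

    def __init__(self, capacity_tokens: int):
        self.capacity_tokens = capacity_tokens
        self.used_tokens = 0
        # conversation_id -> cached context length in tokens, kept in LRU order
        self.entries: OrderedDict[str, int] = OrderedDict()

    def access(self, conv_id: str, context_tokens: int) -> None:
        """Record a turn for `conv_id`; its KV cache now spans `context_tokens`."""
        if conv_id in self.entries:
            self.used_tokens -= self.entries.pop(conv_id)
        self.entries[conv_id] = context_tokens  # most recently used goes last
        self.used_tokens += context_tokens
        while self.used_tokens > self.capacity_tokens and self.entries:
            victim = self._pick_victim()
            self.used_tokens -= self.entries.pop(victim)

    def _pick_victim(self) -> str:
        # Tail-aware tweak: in LRU order, take the first entry whose miss
        # penalty (re-prefill time) would still fit under the TTFT SLO.
        for conv_id, tokens in self.entries.items():
            if tokens * PREFILL_SECONDS_PER_TOKEN <= TTFT_SLO_SECONDS:
                return conv_id
        # Plain LRU fallback: evict the least recently used entry.
        return next(iter(self.entries))
```

Under this (assumed) reading, plain LRU is recovered when every cached context exceeds the SLO budget, which is consistent with the abstract's framing of the method as a small modification of LRU rather than a replacement for it.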