Towards Efficient Large Language Model Serving: A Survey on System-Aware KV Cache Optimization

ACL ARR 2026 January Submission 1465 Authors

30 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: LLM serving, KV cache optimization, LLM efficiency, system behavior
Abstract: Despite the rapid advancement of large language models (LLMs), LLM serving systems remain memory-intensive and costly. The key-value (KV) cache, which stores KV tensors during autoregressive decoding, is crucial for low-latency, high-throughput LLM inference serving. In this survey, we focus on system-aware KV infrastructure for serving LLMs (abbreviated as sKis). We revisit recent work from a system-behavior perspective, organizing existing efforts along three dimensions: execution and scheduling (temporal), placement and migration (spatial), and representation and retention (structural). Furthermore, we analyze cross-behavior co-design affinities and behavior-objective links, highlighting future opportunities. Our work systematizes a rapidly evolving area, providing a foundation for understanding and innovating on KV cache designs in modern LLM serving infrastructure.
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: LLM Efficiency, NLP in resource-constrained settings, quantization
Contribution Types: Surveys
Languages Studied: Language-agnostic
Submission Number: 1465