LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues

Submitted to ICLR 2026 · 15 Sept 2025 (modified: 11 Feb 2026) · License: CC BY 4.0
Keywords: LLM Inference Acceleration, KV Cache, Long-context Multi-Turn Dialogues, Efficient LLMs
TL;DR: We propose LoopServe, a dual-phase system that accelerates multi-turn LLM dialogue inference by adaptively selecting key attention patterns and progressively compressing KV caches, enabling efficient and high-quality responses.
Abstract: Multi-turn dialogues are essential in many real-world applications of large language models (LLMs), such as chatbots and virtual assistants. As conversation histories grow longer, existing LLMs face increasing computational and memory costs, which hinder their ability to provide efficient and responsive interactions. Most current acceleration methods either compress the context or optimize key-value (KV) caching, but they often rely on fixed or position-based heuristics that do not adapt well to the dynamic and unpredictable patterns found in actual multi-turn conversations. As a result, these methods cannot accurately identify and prioritize the most relevant context, which degrades response quality. In this paper, we present LoopServe, an adaptive dual-phase inference acceleration framework for LLMs in multi-turn dialogues. LoopServe introduces two main innovations. First, it performs online sparsification during the prefilling phase by dynamically selecting the most important parts of the attention matrix for each new input. Second, it applies progressive KV compression during decoding by adaptively maintaining a relevant and compact cache based on the most recently generated output tokens. We also propose a new benchmark comprising eleven multi-turn datasets that reflect realistic question positions and conversational dependencies. Extensive experiments demonstrate that LoopServe consistently outperforms existing baselines in effectiveness and significantly accelerates LLM inference across a wide range of long-context dialogue tasks.
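The two phases named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the attention-mass scoring rule, and the fixed cache budget are all illustrative assumptions. It shows only the general idea: during prefill, keep the key positions that attract the most attention from the current input; during decoding, retain the cache entries most attended by the most recently generated tokens.

```python
def top_indices(scores, k):
    """Indices of the k largest scores, returned in ascending position order."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

def sparsify_prefill(attn_rows, keep_ratio=0.5):
    """Phase 1 (hypothetical sketch): from the attention matrix of the new
    input (one row per query token), keep the key positions that receive
    the most total attention."""
    n_keys = len(attn_rows[0])
    totals = [sum(row[j] for row in attn_rows) for j in range(n_keys)]
    k = max(1, int(n_keys * keep_ratio))
    return top_indices(totals, k)

def compress_kv(cache, recent_attn_rows, budget):
    """Phase 2 (hypothetical sketch): retain the KV-cache entries most
    attended by the most recently generated output tokens; drop the rest."""
    n = len(cache)
    totals = [sum(row[j] for row in recent_attn_rows) for j in range(n)]
    keep = top_indices(totals, min(budget, n))
    return [cache[j] for j in keep], keep

# Toy usage: 2 query tokens over 4 key positions, then a cache of 4 entries
# pruned to a budget of 2 using one recent token's attention row.
prefill_keep = sparsify_prefill([[0.1, 0.5, 0.2, 0.2],
                                 [0.3, 0.3, 0.2, 0.2]], keep_ratio=0.5)
kept_cache, kept_idx = compress_kv(["kv0", "kv1", "kv2", "kv3"],
                                   [[0.0, 0.6, 0.1, 0.3]], budget=2)
```

In this toy run, `prefill_keep` selects the two most-attended key positions, and `compress_kv` keeps only the two cache entries the recent token attended to most. The actual system's selection criteria are adaptive and per-input, which this fixed scoring rule does not capture.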
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5636