InfoBlend: Storing and Reusing KV Caches of Multimodal Information without Positional Restriction

ICLR 2026 Conference Submission25243 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multimodal Large Language Model, AI System, Position-Independent Caching
TL;DR: KV caches can be reused without positional restriction through partial recomputation.
Abstract: Prevailing serving platforms currently use context caching to accelerate Multimodal Large Language Model (MLLM) inference. However, this approach reuses only the Key-Value (KV) cache of a prompt's initial token sequence, so the full KV cache must be recomputed even when prefixes differ only slightly. This is particularly inefficient for interleaved text and images, as well as for multimodal retrieval-augmented generation. This paper proposes position-independent caching as a more effective approach to multimodal information management. We have designed and implemented a caching system, named InfoBlend, that addresses both system-level and algorithm-level challenges. InfoBlend stores the KV cache on local disks when it receives multimodal data, then computes and loads the KV cache in parallel during inference. To mitigate accuracy degradation, the system incorporates an integrated reuse-and-recompute mechanism. Experimental results demonstrate that InfoBlend achieves up to a 54\% reduction in response time and a 2$\times$ improvement in throughput over existing context caching systems, with negligible or no accuracy loss.
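The core idea of combining reuse with selective recomputation can be illustrated with a minimal sketch. The class names, the `boundary_window` heuristic, and the per-chunk cache layout below are illustrative assumptions, not InfoBlend's actual API: the sketch only shows how a planner might mark tokens near chunk boundaries for recomputation (their cached KV was produced under a different prefix) while reusing the rest.

```python
# Hypothetical sketch of position-independent KV-cache reuse with partial
# recomputation. All names are illustrative, not taken from InfoBlend.

from dataclasses import dataclass, field

@dataclass
class ChunkCache:
    """KV cache saved for one multimodal chunk (a text span or an image)."""
    chunk_id: str
    num_tokens: int
    kv: list = field(default_factory=list)  # stand-in for per-token KV tensors

class PositionIndependentCache:
    def __init__(self, boundary_window: int = 2):
        # Assumed heuristic: tokens within `boundary_window` of a chunk's
        # start are recomputed, since their attention depends on whatever
        # context now precedes the chunk.
        self.store: dict[str, ChunkCache] = {}
        self.boundary_window = boundary_window

    def put(self, chunk: ChunkCache) -> None:
        # In a real system this would persist the KV cache to local disk.
        self.store[chunk.chunk_id] = chunk

    def plan(self, chunk_ids: list[str]) -> tuple[list[int], list[int]]:
        """Return (reused, recomputed) global token positions for a prompt
        that concatenates the given cached chunks in any order."""
        reused, recomputed = [], []
        pos = 0
        for cid in chunk_ids:
            chunk = self.store[cid]
            for i in range(chunk.num_tokens):
                if i < self.boundary_window:
                    recomputed.append(pos)  # KV stale under the new prefix
                else:
                    reused.append(pos)      # KV loaded from the cache
                pos += 1
        return reused, recomputed
```

For example, with two cached chunks reordered at inference time, the planner reuses interior tokens of each chunk while scheduling boundary tokens for recomputation, which is what lets the loading and the (partial) prefill proceed in parallel.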
Primary Area: infrastructure, software libraries, hardware, systems, etc.
Supplementary Material: zip
Submission Number: 25243