LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference

ACL ARR 2024 June Submission 3080 Authors

15 Jun 2024 (modified: 04 Aug 2024), ACL ARR 2024 June Submission, License: CC BY 4.0
Abstract: Long-context Multimodal Large Language Models (MLLMs) demand substantial computational resources for inference, as the growth of their multimodal Key-Value (KV) cache with increasing input length strains both memory and time efficiency. Unlike single-modality LLMs, which manage only textual contexts, the KV cache of long-context MLLMs holds representations of multiple images with temporal and spatial relationships alongside the related textual context. Because image tokens dominate the cache, traditional KV cache optimizations for LLMs are unsuitable for multimodal long-context settings, and no prior work has addressed this challenge. In this work, we introduce \textbf{\textsc{LOOK-M}}, a pioneering, fine-tuning-free approach that efficiently reduces the multimodal KV cache size while maintaining performance comparable to a full cache. We observe that during prompt prefill the model allocates more attention to textual features than to image features; based on this observation of multimodal interaction, we propose a text-prior method to compress the KV cache. Furthermore, to mitigate the degradation of image contextual information, we propose several compensatory strategies based on merging KV pairs. \textbf{\textsc{LOOK-M}}\footnote{The source code will be made publicly available.} demonstrates that with a significant reduction in KV cache memory usage, for example by \textbf{80\%} in some cases, it not only achieves approximately \textbf{1.3x} faster decoding but also maintains or even \textbf{enhances} performance across a variety of long-context multimodal tasks.
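The snippet below is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: text-prior compression (rank image-token KV pairs by the attention they receive from text tokens during prefill and keep only a fraction) and merge compensation (fold each evicted image KV pair into its most similar retained pair instead of discarding it). The function name text_prior_compress, the use of text keys as stand-ins for text queries, the keep_ratio parameter, and the averaging merge rule are all illustrative assumptions.

# Minimal PyTorch sketch of text-prior KV eviction with merge compensation.
# All names and the averaging merge rule are illustrative assumptions.
import torch
import torch.nn.functional as F


def text_prior_compress(keys, values, is_text, keep_ratio=0.2):
    """Compress one attention head's prefill KV cache.

    keys, values: (seq_len, head_dim) KV cache for one head.
    is_text:      (seq_len,) bool mask, True for text tokens.
    keep_ratio:   fraction of image-token KV pairs to retain.
    """
    # Text tokens are always kept; only image-token entries are eviction
    # candidates (the "text-prior" part of the sketch).
    image_idx = (~is_text).nonzero(as_tuple=True)[0]
    text_idx = is_text.nonzero(as_tuple=True)[0]

    # Score each image key by the total attention it receives from text
    # queries; here text keys stand in for text queries (an assumption).
    attn = F.softmax(keys[text_idx] @ keys.T / keys.shape[-1] ** 0.5, dim=-1)
    image_scores = attn[:, image_idx].sum(dim=0)

    n_keep = max(1, int(keep_ratio * len(image_idx)))
    keep_order = image_scores.topk(n_keep).indices
    kept_img = image_idx[keep_order]
    evict_mask = torch.ones(len(image_idx), dtype=torch.bool)
    evict_mask[keep_order] = False
    evicted = image_idx[evict_mask]

    # Merge compensation: average each evicted KV pair into its most similar
    # (by cosine similarity of keys) retained image KV pair.
    new_keys, new_values = keys.clone(), values.clone()
    if len(evicted) > 0 and len(kept_img) > 0:
        sim = F.normalize(keys[evicted], dim=-1) @ F.normalize(keys[kept_img], dim=-1).T
        target = kept_img[sim.argmax(dim=-1)]
        for src, dst in zip(evicted.tolist(), target.tolist()):
            new_keys[dst] = (new_keys[dst] + keys[src]) / 2
            new_values[dst] = (new_values[dst] + values[src]) / 2

    kept = torch.cat([text_idx, kept_img]).sort().values
    return new_keys[kept], new_values[kept]


if __name__ == "__main__":
    torch.manual_seed(0)
    seq_len, head_dim = 64, 32
    keys, values = torch.randn(seq_len, head_dim), torch.randn(seq_len, head_dim)
    is_text = torch.rand(seq_len) < 0.25  # ~25% text tokens, the rest image
    k, v = text_prior_compress(keys, values, is_text, keep_ratio=0.2)
    print(f"cache size: {seq_len} -> {k.shape[0]} entries")

In this toy setting, retaining 20% of the image-token entries shrinks the cache substantially while the merge step preserves a coarse summary of the evicted image context; the actual LOOK-M strategies differ in their specifics.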
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: multimodal KV cache optimization, multimodal large language model, efficient ML
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 3080