Abstract: Processing long contexts remains a challenge
for large language models (LLMs) due to the
quadratic computational and memory overhead of the self-attention mechanism and the
substantial KV cache sizes during generation.
We propose LLoCO, a novel approach to address this problem by learning contexts offline
through context compression and in-domain
parameter-efficient finetuning with LoRA. Our
method enables an LLM to create a concise
representation of the original context and efficiently retrieve relevant information to answer
questions accurately. Our approach extends
the effective context window of a 4k-token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using 30× fewer tokens during inference. LLoCO achieves up to a 7.62× speed-up during inference and 11.52× higher throughput during finetuning, substantially reducing the cost of long-document question answering. This makes it a promising solution for efficient long-context processing.
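To make the parameter-efficient finetuning component concrete, the minimal sketch below shows how a LoRA adapter can be attached to a LLaMA2-7B base model with the Hugging Face PEFT library. This is an illustrative sketch, not the authors' code: the model identifier, rank, scaling factor, and target attention projections are assumptions, since the abstract does not specify them.

```python
# Minimal sketch (assumed hyperparameters, not the authors' configuration):
# attach a LoRA adapter to a LLaMA2-7B base model for in-domain finetuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_cfg = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA adapter weights train
```

Because only the adapter weights are updated, finetuning touches a small fraction of the model's parameters, which is what makes in-domain adaptation on compressed contexts inexpensive relative to full finetuning.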