Core Context Aware Transformers for Long Context Language Modeling

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Poster · CC BY 4.0
TL;DR: We propose Core Context Aware Attention that greatly reduces the context redundancy with strong performance on long context modeling.
Abstract: Transformer-based Large Language Models (LLMs) have exhibited remarkable success across a wide range of tasks, primarily attributed to the self-attention mechanism, which requires a token to consider all preceding tokens as its context to compute attention. However, when the context length L becomes very large (e.g., 128K), the amount of potentially redundant information in the context tends to increase. The redundant context not only hampers representation performance but also incurs unnecessary computational and storage overhead. In this paper, we propose a plug-and-play Core Context Aware (CCA) Attention for efficient long-context modeling, comprising two complementary modules: 1) A globality-aware pooling module groups input tokens and dynamically compresses each group into one core token based on token significance. In this way, our method automatically focuses on and strengthens the core context while diminishing redundancy during learning, leading to effective long-term dependency modeling. 2) A locality-preserving module incorporates neighboring tokens to preserve local context for detailed representation. Notably, our CCA-Attention can replace the self-attention module in existing LLMs with minimal fine-tuning cost. Extensive experimental results show the superiority of our method over state-of-the-art methods in both long-context modeling and computational efficiency.
Lay Summary: Transformer-based large language models (LLMs) have achieved great success in many tasks, thanks to a mechanism called self-attention. This mechanism allows a model to consider all previous words (or tokens) as context when processing new information. However, when the context becomes very long—such as 128,000 words—the model often encounters redundant information. This redundancy not only slows down computation but also reduces the model's ability to represent important details effectively. To solve this problem, we developed a plug-and-play solution called **Core Context Aware (CCA) Attention**, which makes long-context modeling more efficient. Our approach has two key components: 1) A **globality-aware pooling module** that compresses less important parts of the context into "core tokens," helping the model focus on the most meaningful information and reduce redundancy. 2) A **locality-preserving module** that retains nearby words to ensure detailed representation of local context. Importantly, our CCA-Attention can replace the self-attention module in existing LLMs with minimal fine-tuning. Experiments show that our method outperforms state-of-the-art techniques in both handling long contexts and improving computational efficiency.
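Below is a minimal PyTorch sketch of the two-branch idea described above, assuming a single attention head, a fixed group size, weighted mean pooling as the compression step, and a simple sum to fuse the two branches. The class name, argument names, and fusion choice are illustrative assumptions, not the authors' exact design; see the repository linked below for the released implementation.

```python
# Minimal sketch of a CCA-style attention layer: a globality-aware pooling
# branch over compressed "core tokens" plus a locality-preserving branch over
# a sliding window. Assumptions: single head, seq_len divisible by group_size,
# no causal masking over core tokens (omitted here for brevity).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CCAAttentionSketch(nn.Module):
    def __init__(self, dim: int, group_size: int = 16, local_window: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.score = nn.Linear(dim, 1, bias=False)  # per-token significance score
        self.group_size = group_size
        self.local_window = local_window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); seq_len assumed divisible by group_size
        b, n, d = x.shape
        g = self.group_size

        # --- Globality-aware pooling: compress each group into one core token ---
        groups = x.view(b, n // g, g, d)                    # (b, n/g, g, d)
        weights = F.softmax(self.score(groups), dim=2)      # significance within each group
        core = (weights * groups).sum(dim=2)                # (b, n/g, d)

        # Every query attends to the n/g core tokens instead of all n tokens.
        q = self.q(x)                                        # (b, n, d)
        global_out = F.scaled_dot_product_attention(q, self.k(core), self.v(core))

        # --- Locality-preserving branch: sliding-window causal attention ---
        k_loc, v_loc = self.k(x), self.v(x)
        idx = torch.arange(n, device=x.device)
        # Each token sees itself and the previous (local_window - 1) tokens.
        mask = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < self.local_window)
        local_out = F.scaled_dot_product_attention(q, k_loc, v_loc, attn_mask=mask)

        # Fuse the two branches (a simple sum here; the fusion rule is an assumption).
        return global_out + local_out
```

With group size g and window w, each query attends to roughly n/g core tokens plus w neighbors rather than all n preceding tokens, which is where the computational and memory savings over full self-attention come from.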
Link To Code: https://github.com/chenyaofo/CCA-Attention
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models; Efficient Attention
Submission Number: 692