Autoencoding-Free Context Compression for LLMs via Contextual Semantic Anchors

ICLR 2026 Conference Submission 17241 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Context Compression, Long Context LLMs, LLM Memory
TL;DR: SAC achieves efficient context compression without autoencoder training by using contextual semantic anchors and bidirectional attention.
Abstract: Context compression presents a promising approach for accelerating large language model (LLM) inference by compressing long contexts into compact representations. Current context compression methods predominantly rely on autoencoding tasks to train context-agnostic compression tokens that compress contextual semantics. While autoencoding tasks enable compression tokens to acquire compression capabilities, they create a fundamental mismatch: the models are optimized for reconstruction objectives that diverge from actual downstream tasks, thereby weakening the features most beneficial for real-world usage. We propose Semantic-Anchor Compression (SAC), a novel method that shifts from autoencoding-based compression to an architecture equipped with this compression capability $\textit{a priori}$. Instead of training models to compress contexts through autoencoding tasks, SAC directly selects anchor tokens from the original context and aggregates contextual information into their key-value (KV) representations. By deriving representations directly from contextual tokens, SAC eliminates the need for autoencoding training. To ensure compression performance while directly leveraging anchor tokens, SAC incorporates two key designs: (1) anchor embeddings that enable the compressor to identify critical tokens, and (2) a bidirectional attention modification that allows anchor tokens to capture information from the entire context. Experimental results demonstrate that SAC consistently outperforms existing context compression methods across various compression ratios. On out-of-distribution evaluation using MRQA, SAC achieves a 1-point EM improvement at 5x compression over strong baselines, with increasing advantages at higher compression ratios.
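To make the attention modification concrete, the following is a minimal sketch of one plausible reading of the abstract: ordinary tokens keep causal attention, while the rows of selected anchor tokens are opened to the full context so their KV states can aggregate information from every position. The function and variable names (build_anchor_attention_mask, anchor_idx) are illustrative assumptions, not the authors' implementation, and the anchor-embedding component is only noted in a comment.

    import torch

    def build_anchor_attention_mask(seq_len: int, anchor_idx: torch.Tensor) -> torch.Tensor:
        # True = attention allowed. Ordinary tokens attend causally;
        # anchor tokens attend bidirectionally over the whole context,
        # so their key-value states can serve as the compressed context.
        mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # causal base
        mask[anchor_idx, :] = True  # anchor rows see every position, past and future
        # (In SAC, anchors would also receive learned anchor embeddings added to
        # their input embeddings so the compressor can identify critical tokens.)
        return mask

    # Usage: with an 8-token context and anchors at positions 2 and 5,
    # only the anchors' KV representations would be retained (4x compression).
    mask = build_anchor_attention_mask(8, torch.tensor([2, 5]))

Under this reading, no autoencoding objective is needed because the compressed representations are derived from tokens already present in the context rather than from trained compression tokens.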
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17241