APCE: Adaptive Progressive Context Expansion for Long Context Processing

NeurIPS 2025 Workshop MLForSys, Submission 54

Published: 30 Oct 2025, Last Modified: 12 Nov 2025. License: CC BY 4.0
Keywords: Attention Models, Efficient Inference Methods, Generative Models, Hardware and Systems, Natural Language Processing, Optimization for Deep Networks
TL;DR: Input sparsification method based on semantic similarity to improve summarization performance
Abstract: Deploying useful Long-Context Transformer Models (LCTMs) requires addressing two key challenges: (1) a growing memory footprint, since self-attention scales quadratically and the KV-cache linearly in memory as sequence length increases; and (2) the ContextRot phenomenon, where empirical evidence suggests that transformer performance degrades as context length grows. Given the shared dependency on the input, a natural question arises: "Can we surgically select the most important input chunks for processing to synergistically (a) reduce the memory footprint and (b) mitigate the ContextRot effects?" In this paper, we answer this question in the affirmative for long-context summarization tasks. We propose APCE, a context-aware solution that selects the most important input chunks through low-dimensional semantic similarity matching with the current query. By operating directly on the input, APCE avoids a strict dependency on the underlying hardware or CUDA environment, offering a compatible solution that scales across different deployment systems. Our empirical evaluations demonstrate superior or on-par summarization performance for APCE compared to the full dense baseline while using only a fraction (50%-70%) of the input sequence, yielding KV-cache and self-attention memory efficiency improvements. We hope our findings inspire further research on context-aware efficiency solutions for LCTMs geared towards other relevant long-context tasks.
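To make the core idea concrete, below is a minimal sketch of what query-conditioned chunk selection could look like. This is an illustrative assumption, not the authors' implementation: the `select_chunks` function, the `keep_ratio` parameter, and the choice of the `all-MiniLM-L6-v2` encoder from the sentence-transformers library are all hypothetical stand-ins for the paper's low-dimensional semantic matching.

```python
# Hypothetical APCE-style chunk selection (illustrative, not the paper's code).
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

def select_chunks(chunks, query, keep_ratio=0.6,
                  model_name="all-MiniLM-L6-v2"):
    """Keep the `keep_ratio` fraction of chunks most semantically similar
    to the query, preserving their original document order."""
    model = SentenceTransformer(model_name)  # low-dimensional sentence encoder
    # Normalized embeddings make dot products equal cosine similarity.
    chunk_emb = model.encode(chunks, normalize_embeddings=True)
    query_emb = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_emb @ query_emb          # cosine similarity per chunk
    k = max(1, int(len(chunks) * keep_ratio))
    top = np.sort(np.argsort(-scores)[:k])  # top-k indices, back in input order
    return [chunks[i] for i in top]

# Usage: feed only the selected ~50-70% of chunks to the LCTM, which shrinks
# both the KV-cache and the self-attention memory footprint.
chunks = ["Intro paragraph ...", "Method details ...", "Unrelated aside ..."]
pruned = select_chunks(chunks, query="Summarize the method", keep_ratio=0.67)
print(pruned)
```

Because the selection operates purely on the input text before it reaches the model, a scheme like this needs no custom kernels, which matches the paper's claim of decoupling from the underlying hardware or CUDA environment.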
Submission Number: 54