Gated Differentiable Working Memory for Long-Context Language Modeling

ACL ARR 2026 January Submission3430 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: LLM Memory, Long-context, Test-time Adaptation, Context Engineering
Abstract: Long contexts break transformers: attention scores dilute across thousands of tokens, critical information gets lost in the middle, and the model cannot adapt to novel patterns at inference time. Recent work on test-time adaptation addresses this by maintaining a form of working memory---transient parameters updated on the current context---but existing approaches employ uniform write policies that waste computation on low-value regions and suffer from high gradient variance across semantically heterogeneous contexts. In this work, we reframe test-time adaptation as a budget-constrained memory consolidation problem, asking: given a limited computational budget, which parts of the context should be consolidated into working memory? We propose GDWM ($\textbf{G}$ated $\textbf{D}$ifferentiable $\textbf{W}$orking $\textbf{M}$emory), a framework that introduces a Write Controller to gate the memory consolidation process. Our controller estimates Contextual Utility---an information-theoretic measure quantifying how much each region depends on long-range context---and allocates gradient steps accordingly, subject to a coverage constraint that ensures global representation. Theoretically, we prove that our chunk-restricted sampling strategy reduces gradient variance by eliminating the inter-chunk variance term via the Law of Total Variance. Experiments on the ZeroSCROLLS and LongBench v2 benchmarks demonstrate that GDWM achieves comparable or superior performance with 4$\times$ fewer gradient steps than uniform baselines---excelling on sparse-information tasks (+6--13\% on Qasper, +5--13\% on GovReport for smaller models) while revealing principled trade-offs on dense-coverage tasks, establishing a new efficiency-performance Pareto frontier for test-time adaptation.
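The abstract's variance-reduction claim can be sketched with the Law of Total Variance (the symbols below are illustrative, not necessarily the paper's own notation): let $g$ be a per-token gradient sampled by first drawing a chunk index $c$ and then a token within that chunk. Then

```latex
% Law of Total Variance applied to a stochastic gradient $g$
% conditioned on the chunk index $c$:
\mathrm{Var}(g)
  = \underbrace{\mathbb{E}_c\!\left[\mathrm{Var}(g \mid c)\right]}_{\text{intra-chunk variance}}
  + \underbrace{\mathrm{Var}_c\!\left(\mathbb{E}[g \mid c]\right)}_{\text{inter-chunk variance}}
```

Restricting each update's samples to a single chunk fixes $c$ for that update, so only the intra-chunk term contributes; the inter-chunk term $\mathrm{Var}_c(\mathbb{E}[g \mid c])$, which is large when chunks are semantically heterogeneous, is eliminated. This is the decomposition the abstract invokes; the paper's formal statement may differ in conditioning details.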
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM/AI agents
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3430