Gradual Forgetting: Logarithmic Compression for Extending Transformer Context Windows

Published: 23 Sept 2025, Last Modified: 17 Feb 2026 | CogInterp @ NeurIPS 2025 Poster | CC BY 4.0
Keywords: Scale-Invariant Memory, Logarithmic Compression, Language Modeling, Transformers, Long Context Modeling
TL;DR: We propose enhancing transformers with scale-invariant, logarithmically compressed memory inspired by cognitive science, demonstrating improved language modeling performance and efficient capture of long-range dependencies.
Abstract: Most approaches to long-context processing increase the complexity of the transformer's internal architecture by integrating mechanisms such as recurrence or auxiliary memory modules. In this work, we introduce an alternative approach that modifies the input representation itself, rather than the transformer architecture. Inspired by cognitive models of human memory, our method applies a scale-invariant logarithmic compression to the input tokens. The resulting compressed representation is processed by a standard, unmodified transformer, preserving architectural simplicity. We evaluate this approach on the WikiText-103 and PG-19 language modeling benchmarks, showing a reduction in perplexity compared to uncompressed baselines. Moreover, performance improves consistently with longer compressed temporal contexts, showing that input-level logarithmic compression is a simple and effective way to extend a transformer's long-range memory.
Submission Number: 85
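
To make the input-level compression described in the abstract concrete, here is a minimal sketch, not the authors' implementation: it assumes the token history is summarized by average-pooling embeddings over exponentially widening time bins, so a history of N tokens is represented by O(log N) slots, with finer resolution near the present. The function name `log_compress`, the bin count, and the growth base are illustrative assumptions; the paper's exact compression operator may differ.

```python
import torch

def log_compress(embeddings: torch.Tensor, n_bins: int = 10, base: float = 2.0) -> torch.Tensor:
    """Summarize a (seq_len, d_model) token history in O(log seq_len) slots.

    The most recent token gets its own slot; each earlier bin covers an
    exponentially wider stretch of the past, and the embeddings inside a
    bin are average-pooled into a single vector. Temporal resolution thus
    decays logarithmically with distance from the present, with no hard
    context cutoff.
    """
    seq_len, _ = embeddings.shape
    slots = []
    end = seq_len          # walk backwards from the most recent token
    width = 1              # current bin width, grows geometrically
    while end > 0 and len(slots) < n_bins:
        start = max(0, end - width)
        slots.append(embeddings[start:end].mean(dim=0))  # pool this bin
        end = start
        width = int(width * base)
    return torch.stack(slots[::-1])  # oldest bin first, like a normal sequence

# A 4096-token history collapses to 12 memory slots, which could then be
# prepended to the current segment's embeddings and fed to an
# unmodified transformer.
history = torch.randn(4096, 512)
memory = log_compress(history, n_bins=12)
print(memory.shape)  # torch.Size([12, 512])
```

Because the bin widths grow geometrically, doubling the history length adds only a constant number of slots, which is one plausible reading of how longer compressed temporal contexts stay cheap to attend over.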