M+: Extending MemoryLLM with Scalable Long-Term Memory

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Poster · CC BY 4.0
TL;DR: Equipping MemoryLLM with a scalable long-term memory
Abstract: Equipping large language models (LLMs) with latent-space memory has attracted increasing attention, as such memory can extend the context window of existing language models. However, retaining information from the distant past remains a challenge. For example, MemoryLLM (Wang et al., 2024a), a representative work with latent-space memory, compresses past information into hidden states across all layers, forming a memory pool of 1B parameters. While effective for sequence lengths up to 16k tokens, it struggles to retain knowledge beyond 20k tokens. In this work, we address this limitation by introducing M+, a memory-augmented model based on MemoryLLM that significantly enhances long-term information retention. M+ integrates a long-term memory mechanism with a co-trained retriever, dynamically retrieving relevant information during text generation. We evaluate M+ on diverse benchmarks, including long-context understanding and knowledge retention tasks. Experimental results show that M+ significantly outperforms MemoryLLM and recent strong baselines, extending knowledge retention from under 20k to over 160k tokens with similar GPU memory overhead.
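The abstract describes M+ as MemoryLLM augmented with a long-term memory and a co-trained retriever that pulls relevant memory back into the context during generation. The sketch below is a minimal, hypothetical illustration of that retrieval-during-generation idea, not the released implementation; all names (`LongTermMemory`, `MemoryAugmentedStep`, `retrieve_top_k`, `query_proj`) and the cosine-similarity retriever are assumptions made here for illustration.

```python
# Hypothetical sketch of "long-term memory + co-trained retriever" at decode time.
# Not the authors' code; see https://github.com/wangyu-ustc/MemoryLLM for the real thing.
import torch
import torch.nn as nn


class LongTermMemory:
    """Stores evicted memory tokens as (key, value) hidden states."""

    def __init__(self, d_model: int):
        self.keys = torch.empty(0, d_model)    # retrieval keys
        self.values = torch.empty(0, d_model)  # memory-token hidden states

    def add(self, keys: torch.Tensor, values: torch.Tensor) -> None:
        self.keys = torch.cat([self.keys, keys], dim=0)
        self.values = torch.cat([self.values, values], dim=0)

    def retrieve_top_k(self, query: torch.Tensor, k: int) -> torch.Tensor:
        """Return the k memory tokens whose keys best match the query (cosine similarity)."""
        if self.keys.shape[0] == 0:
            return torch.empty(0, self.values.shape[-1])
        scores = torch.nn.functional.cosine_similarity(self.keys, query.unsqueeze(0), dim=-1)
        top = scores.topk(min(k, scores.numel())).indices
        return self.values[top]


class MemoryAugmentedStep(nn.Module):
    """One decoding step: retrieved long-term tokens + short-term pool form the context."""

    def __init__(self, d_model: int, pool_size: int):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)            # co-trained retriever head (assumed)
        self.register_buffer("pool", torch.zeros(pool_size, d_model))  # short-term memory pool
        self.long_term = LongTermMemory(d_model)

    def forward(self, hidden: torch.Tensor, k: int = 8) -> torch.Tensor:
        # 1. Build a retrieval query from the current hidden state.
        query = self.query_proj(hidden)
        # 2. Fetch the most relevant long-term memory tokens.
        retrieved = self.long_term.retrieve_top_k(query, k)
        # 3. Concatenate them with the short-term pool; the decoder would attend over
        #    this extended context (the attention itself is omitted here).
        return torch.cat([retrieved, self.pool], dim=0)


# Toy usage: with an empty long-term store, the context is just the short-term pool.
step = MemoryAugmentedStep(d_model=64, pool_size=16)
context = step(torch.randn(64))  # shape: (retrieved + pool_size, 64)
```

The key design point suggested by the abstract is that the retriever is trained jointly with the model, so retrieval quality is optimized for generation rather than bolted on afterwards.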
Lay Summary: Large language models (like ChatGPT) can normally “remember” only what fits in their immediate reading window, roughly a few thousand words. When conversations or documents get longer, older parts fall out of view and the model starts to forget. Researchers have tried to fix this with latent-space memory: special chunks of internal data that act like sticky notes for past information. A recent system called MemoryLLM was one of the first big steps: it compresses earlier text into a huge internal memory so the model can recall up to about 20,000 tokens (roughly a short novel). Our new model, M+, pushes that memory horizon much further. We add:
1. Long-term memory slots that keep important facts around permanently.
2. A built-in “retriever” that knows how to fetch the right memory at the right moment while the model is writing.
Together these upgrades let M+ keep track of more than 160,000 tokens, eight times farther than before, without using extra GPU space. In tests that probe long-document understanding and fact recall, M+ consistently beats MemoryLLM and other state-of-the-art methods, showing it can hold on to distant details while remaining just as efficient to run.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/wangyu-ustc/MemoryLLM
Primary Area: Deep Learning->Foundation Models
Keywords: memory, long-term memory, long context
Submission Number: 1091