Truth-Maintained Memory Agent: Proactive Quality Control for Reliable Long-Context Dialogue

Published: 08 Nov 2025, Last Modified: 28 Nov 2025 · ResponsibleFM @ NeurIPS 2025 · CC BY 4.0
Keywords: false memory, long-context dialogue, multi-agent systems, memory management, retrieval-augmented generation, truth verification, dialogue systems, language models
Abstract: Large Language Models (LLMs) are prone to false-memory formation during long, multi-turn interactions, incorporating incorrect, irrelevant, or contradictory information into memory. Traditional remedies such as enlarging context windows, summarizing memory, or selective retrieval are often computationally expensive and reactive, allowing errors to accumulate before they are corrected. We propose the Truth-Maintained Memory Agent (TMMA), a proactive multi-agent framework that enforces write-time quality control. In TMMA, incoming context undergoes token gating, complexity evaluation, and truth verification before being routed into a four-tier hierarchical memory consisting of Working Memory, Summarized Memory, Archival Memory, and a Flagged Bin for contested content. This structure balances context specificity with long-term retention, reduces the accumulation of noise, and preserves the LLM's coherence more efficiently than simply expanding the context window. Our experiments indicate that TMMA significantly reduces the incidence of false memories and improves response quality on existing benchmarks, offering a path toward scalable, reliable long-context memory management in LLMs.
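The routing logic the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the `MemoryStore` class, the boolean signals (`verified`, `recent`, `token_budget_ok`), and the tier-selection rules are all assumptions standing in for the paper's token-gating, complexity-evaluation, and truth-verification components.

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    """The four tiers named in the abstract."""
    WORKING = "working"
    SUMMARIZED = "summarized"
    ARCHIVAL = "archival"
    FLAGGED = "flagged"


@dataclass
class MemoryStore:
    """Minimal sketch of write-time quality control (hypothetical)."""
    tiers: dict = field(default_factory=lambda: {t: [] for t in Tier})

    def write(self, item: str, *, verified: bool, recent: bool,
              token_budget_ok: bool) -> Tier:
        # Write-time gating: contested content never enters the main
        # hierarchy; it is quarantined in the Flagged Bin instead.
        if not verified:
            tier = Tier.FLAGGED
        elif recent and token_budget_ok:
            tier = Tier.WORKING      # full-detail, short-horizon context
        elif recent:
            tier = Tier.SUMMARIZED   # compressed mid-horizon context
        else:
            tier = Tier.ARCHIVAL     # long-term retention
        self.tiers[tier].append(item)
        return tier
```

The key design point the sketch illustrates is that quality control happens at write time, before an item can pollute memory, rather than reactively at read time.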
Submission Number: 141