From Retrieval to Reconstruction: Constructing Evolvable Cognitive Memory for Long-term Dialogue

ACL ARR 2026 January Submission 2966 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Long-term Memory, Retrieval-Augmented Generation, Cognitive Architectures, Autonomous Agents, Knowledge Graphs, Memory Consolidation, Dialogue Systems
Abstract: Large Language Models (LLMs) are evolving into long-term personal companions, necessitating memory systems that go beyond simple text retrieval. However, existing Retrieval-Augmented Generation (RAG) frameworks typically treat memory as a flat, passive repository. This leads to semantic isolation, where the temporal links between events and the logical connections between entities are lost, hindering complex reasoning in multi-turn dialogues. In this paper, we introduce \textbf{CogMem}, a cognitive graph architecture designed to reconstruct long-term context fidelity. CogMem proposes a human-centric \textbf{PEC$^2$F (Person-Event-Concept-Claim-Fact)} schema that structurally organizes dialogue into interconnected episodic traces and semantic knowledge. By explicitly integrating \textbf{Claim nodes} within this framework, CogMem ensures epistemic clarity, distinguishing subjective attributions from objective records. Shifting from static retrieval pipelines to agentic active recall, we further develop a Cognitive Search Agent that dynamically navigates this graph using atomic operators (e.g., intersection, temporal scanning). Experiments on the LoCoMo and LongMemEval benchmarks demonstrate that CogMem significantly outperforms state-of-the-art baselines in multi-hop reasoning and temporal consistency, validating the necessity of structural reconstruction over passive vector matching.
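The abstract describes a graph schema with five node types (Person, Event, Concept, Claim, Fact) and atomic search operators such as intersection and temporal scanning. The paper itself does not specify an implementation here; the following is a minimal, hypothetical sketch of what such a schema and two atomic operators might look like — all class and function names are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical PEC^2F node types (assumption: names taken from the abstract).
NODE_TYPES = {"Person", "Event", "Concept", "Claim", "Fact"}

@dataclass
class Node:
    id: str
    type: str                          # one of NODE_TYPES
    text: str
    timestamp: Optional[float] = None  # Events carry episodic time

@dataclass
class MemoryGraph:
    nodes: dict = field(default_factory=dict)  # id -> Node
    edges: dict = field(default_factory=dict)  # id -> set of neighbor ids

    def add(self, node: Node, neighbors=()):
        """Insert a node and undirected links to existing neighbors."""
        self.nodes[node.id] = node
        self.edges.setdefault(node.id, set()).update(neighbors)
        for n in neighbors:
            self.edges.setdefault(n, set()).add(node.id)

    def neighbors(self, node_id: str, node_type: Optional[str] = None):
        out = [self.nodes[n] for n in self.edges.get(node_id, ())]
        return [n for n in out if node_type is None or n.type == node_type]

def intersect(g: MemoryGraph, a: str, b: str, node_type: str = "Event"):
    """Atomic intersection: nodes of `node_type` linked to both a and b."""
    na = {n.id for n in g.neighbors(a, node_type)}
    nb = {n.id for n in g.neighbors(b, node_type)}
    return [g.nodes[i] for i in sorted(na & nb)]

def temporal_scan(g: MemoryGraph, start: float, end: float):
    """Atomic temporal scan: Events with timestamps in [start, end]."""
    return sorted(
        (n for n in g.nodes.values()
         if n.type == "Event" and n.timestamp is not None
         and start <= n.timestamp <= end),
        key=lambda n: n.timestamp,
    )
```

Under this sketch, a multi-hop query like "when did Alice and Bob first meet?" could compose the two operators: intersect the Event neighborhoods of the two Person nodes, then order the result by timestamp — which is the kind of agentic graph navigation (as opposed to flat vector retrieval) the abstract argues for.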
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Dialogue and Interactive Systems, Information Retrieval and Text Mining, Question Answering, Language Modeling and Analysis of Language Models
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2966