GAM: Hierarchical Graph Memory for LLM-based Agents

Published: 03 Mar 2026 · Last Modified: 09 Mar 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: Large Language Models, LLM Agents, Long-term Memory
TL;DR: We propose GAM to resolve the conflict between rapid perception and stable retention via state-based consolidation that decouples active buffering from archived history within a hierarchical graph memory.
Abstract: To sustain coherent long-term interactions, Large Language Model (LLM) agents must navigate the tension between acquiring new information and retaining prior knowledge. Current unified stream-based memory systems facilitate context updates but remain vulnerable to interference from transient noise. Conversely, discrete structured memory architectures provide robust knowledge retention but often struggle to adapt to fluid narrative evolution. To address this, we propose \textbf{\textsc{GAM}}, a hierarchical \textbf{G}raph-based \textbf{A}gentic \textbf{M}emory framework that explicitly decouples memory encoding from consolidation to resolve the conflict between rapid context perception and stable knowledge retention. By isolating ongoing dialogue in an event progression graph and integrating it into a topic associative network only upon semantic shifts, our approach minimizes interference while preserving long-term consistency. Additionally, we introduce a Graph-guided, Multi-factor Retrieval strategy to enhance context precision. Experiments on the LoCoMo and LongDialQA benchmarks indicate that our method consistently outperforms state-of-the-art baselines in both reasoning accuracy and efficiency.
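The core mechanism the abstract describes — buffering new dialogue turns separately and merging them into long-term storage only when the topic shifts — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the event progression graph is reduced to a flat buffer, the topic associative network to a dictionary, semantic-shift detection to an exact topic-label comparison, and multi-factor retrieval to a topic-match lookup.

```python
# Hypothetical sketch of state-based consolidation: an active buffer
# (standing in for the event progression graph) is kept separate from
# archived history (standing in for the topic associative network) and
# merged only when a semantic shift is detected.
from dataclasses import dataclass, field

@dataclass
class GraphMemory:
    buffer: list = field(default_factory=list)   # active episode (encoding)
    topics: dict = field(default_factory=dict)   # archived history (consolidated)

    def encode(self, topic: str, utterance: str) -> None:
        # Rapid perception: new turns touch only the active buffer,
        # so transient noise cannot disturb the archive.
        if self.buffer and self.buffer[-1][0] != topic:
            self.consolidate()  # topic change stands in for a semantic shift
        self.buffer.append((topic, utterance))

    def consolidate(self) -> None:
        # Stable retention: fold the completed episode into the topic network.
        for topic, utterance in self.buffer:
            self.topics.setdefault(topic, []).append(utterance)
        self.buffer.clear()

    def retrieve(self, topic: str) -> list:
        # Retrieval stub standing in for graph-guided, multi-factor scoring:
        # archived matches first, then any matching turns still in the buffer.
        archived = self.topics.get(topic, [])
        active = [u for t, u in self.buffer if t == topic]
        return archived + active

mem = GraphMemory()
mem.encode("travel", "We booked flights to Rome.")
mem.encode("travel", "The hotel is near the Colosseum.")
mem.encode("work", "The deadline moved to Friday.")  # shift: travel turns are archived
```

The key design point mirrored here is the decoupling: `encode` never writes to `topics` directly, so the archive changes only at episode boundaries, while `retrieve` can still see both the stable archive and the in-flight buffer.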
Submission Number: 34