MemoGraph: Augmenting LLMs with Explicit Episodic Memory for Multi-step Mathematical Reasoning

Published: 03 Mar 2026, Last Modified: 25 Apr 2026
Venue: ICLR 2026 Workshop MemAgents
License: CC BY 4.0
Keywords: large language models, mathematical reasoning, neuro-symbolic methods, graph neural networks, retrieval-augmented generation, episodic memory
TL;DR: We introduce MemoGraph, a graph-based episodic memory layer that uses GNN-guided theorem retrieval and automatic verification to boost LLM multi-step math reasoning accuracy and robustness.
Abstract: Large Language Models (LLMs) fundamentally struggle with complex mathematical reasoning due to the volatility of their implicit parametric memory, which leads to context drift and hallucination. Existing paradigms, relying on linear generation or static retrieval, fail to maintain a precise, persistent record of the evolving proof state. To address this, we propose \textbf{MemoGraph}, a neuro-symbolic framework that augments LLMs with an explicit episodic memory layer. We formulate reasoning as the dynamic maintenance of a heterogeneous graph, enabling state-aware reading that utilizes graph-structural encoding to retrieve relevant principles from a verified semantic memory. Furthermore, we introduce a write-gating verification module to intercept invalid deductions before they are consolidated into the reasoning context. Empirical evaluations across multiple benchmarks demonstrate that MemoGraph significantly outperforms strong baselines in both accuracy and robustness by ensuring memory integrity, establishing a scalable paradigm for trustworthy reasoning agents.
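To make the abstract's two core mechanisms concrete, here is a minimal sketch of a write-gated episodic graph memory. All names (`EpisodicGraphMemory`, `write`, `neighborhood`, `toy_verifier`) are illustrative assumptions, not the paper's API; the toy verifier is a placeholder for the paper's verification module, and the one-hop `neighborhood` read stands in for GNN-guided structural retrieval.

```python
# Hypothetical sketch of a MemoGraph-style memory layer (assumed names;
# the paper's actual implementation may differ substantially).

class EpisodicGraphMemory:
    """Heterogeneous graph of reasoning steps: nodes are facts or
    deductions; edges record which premises support each deduction."""

    def __init__(self, verifier):
        self.nodes = {}           # node_id -> {"kind": ..., "text": ...}
        self.edges = []           # (premise_id, deduction_id) support links
        self.verifier = verifier  # write gate: True iff a deduction is valid

    def write(self, node_id, kind, text, premises=()):
        # Write-gating: an invalid deduction is intercepted here, before
        # it can be consolidated into the reasoning context.
        premise_texts = [self.nodes[p]["text"] for p in premises]
        if kind == "deduction" and not self.verifier(premise_texts, text):
            return False
        self.nodes[node_id] = {"kind": kind, "text": text}
        self.edges.extend((p, node_id) for p in premises)
        return True

    def neighborhood(self, node_id):
        # State-aware read: premises supporting a node -- a simple
        # stand-in for the paper's GNN-guided structural retrieval.
        return [p for (p, d) in self.edges if d == node_id]


def toy_verifier(premises, deduction):
    # Toy gate: accept a deduction only if it reuses a variable name
    # from some premise (a placeholder for real symbolic checking).
    return any(tok in deduction
               for p in premises
               for tok in p.split() if tok.isalpha())


mem = EpisodicGraphMemory(toy_verifier)
mem.write("f1", "fact", "x = 2")
mem.write("f2", "fact", "y = 3")
ok = mem.write("d1", "deduction", "x + y = 5", premises=("f1", "f2"))
bad = mem.write("d2", "deduction", "z = 99", premises=("f1",))
print(ok, bad, mem.neighborhood("d1"))  # True False ['f1', 'f2']
```

The invalid deduction `d2` never enters the graph, so later reads over the memory cannot drift onto it, which is the memory-integrity property the abstract claims.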
Submission Number: 30