Knowledge Graph Enhanced Memory-Augmented Retrieval for Long Context Modeling

ACL ARR 2026 January Submission2296 Authors

02 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Knowledge Graph, Memory-Augmented Retrieval, Long Context Modeling
Abstract: Memory-augmented retrieval systems excel at semantic matching but fail to capture structural relationships between entities in long-form text—relationships critical for knowledge-intensive applications. KG-ERMAR addresses this limitation by constructing dynamic, context-specific knowledge graphs from input text during inference, enabling domain-adaptive retrieval that leverages both semantic similarity and explicit entity relationships. The framework performs real-time entity and relation extraction to build contextual knowledge graphs, then integrates graph-structural embeddings with textual semantics through a specialized multi-component memory architecture. Three memory banks—contextual, semantic, and structural—are maintained with retrieval signals fused via learned weights to capture both surface-level semantics and deeper relational patterns. Evaluated on SlimPajama (84.7K training examples), WikiText-103 (4,358 examples), PG-19 (100 examples), and Proof-Pile (46.3K examples), KG-ERMAR achieves up to 8.5\% lower perplexity and 2--2.5$\times$ better memory efficiency than strong baselines across context lengths from 1K to 32K tokens. The dynamic knowledge graph construction approach advances memory-augmented language modeling by enabling domain-specific knowledge representation that adapts to input contexts rather than relying on fixed knowledge bases.
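The abstract's fusion step, combining retrieval signals from the contextual, semantic, and structural memory banks via learned weights, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bank sizes, embedding dimension, cosine-similarity scoring, and softmax-normalized fusion logits are all assumptions; in the actual framework the weights would be trained parameters and the structural bank would hold graph-derived embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_scores(query, bank):
    """Cosine similarity of one query vector against every entry in a memory bank."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return b @ q

# Three memory banks, named after the abstract; sizes and contents are illustrative.
dim, n_entries = 64, 10
banks = {
    "contextual": rng.normal(size=(n_entries, dim)),
    "semantic":   rng.normal(size=(n_entries, dim)),
    "structural": rng.normal(size=(n_entries, dim)),  # stand-in for graph embeddings
}

# "Learned weights": here fixed logits through a softmax; during training these
# logits would be parameters updated by gradient descent.
logits = np.array([0.5, 1.0, 0.2])
weights = np.exp(logits) / np.exp(logits).sum()

def fused_retrieval(query, banks, weights):
    """Weighted sum of per-bank retrieval scores; returns the best entry index."""
    scores = sum(w * cosine_scores(query, bank)
                 for w, bank in zip(weights, banks.values()))
    return int(np.argmax(scores)), scores

query = rng.normal(size=dim)
best, scores = fused_retrieval(query, banks, weights)
```

The single fused score vector lets one top-k search serve all three signal types; how the real system balances semantic similarity against graph structure is determined by the learned weights, which this sketch only stubs out.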
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: retrieval-augmented models, dense retrieval, re-ranking, memory augmented retrieval, knowledge graphs
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2296