Keywords: Large Language Models, Long-term Memory, Cognitive Memory Architecture
Abstract: Although rapid progress in large language models (LLMs) has enabled agents to perform complex decision-making and interaction, their limited long-term memory capacity hinders the effective retention and organization of historical interactions. This often leads to instability and semantic fragmentation in multi-turn dialogues and long-range reasoning tasks. Existing memory mechanisms struggle with structural reorganization, dynamic semantic retrieval, and the modeling of cognitive phenomena such as memory consolidation and forgetting. To address these challenges, we propose MemoryField, a novel dynamic spatial cognitive memory architecture driven by an attention-based gravitational field model. MemoryField represents memory items as nodes in a high-dimensional semantic space, where semantic attraction, repulsion, attention-driven forces, and decay mechanisms enable self-organized evolution and adaptive restructuring. By integrating node dynamics with fusion and forgetting processes, our approach ensures semantic coherence and cognitive stability. We validate the effectiveness of our approach on multi-turn dialogue and multi-type reasoning tasks. In dialogue tasks, MemoryField outperforms baseline models, achieving improvements of up to 4.9 points in MAUVE and 3.3 points in ROUGE-L. In long-context reasoning tasks, the F1 score improves by up to 14.7 points on adversarial and temporal reasoning benchmarks. These results demonstrate that the proposed method offers significant advantages in memory modeling and can serve as a general solution for long-term memory management in LLM agents.
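The abstract describes memory items as nodes in a semantic space subject to attraction, repulsion, attention-driven reinforcement, and decay, but does not give the update equations. The sketch below is a minimal, hypothetical illustration of that kind of force-field memory dynamics, not the paper's actual algorithm; the class name, the coefficients (attract, repulse, decay, forget_thresh), and the specific force forms are all assumptions made for illustration.

```python
import numpy as np

class MemoryField:
    """Hypothetical sketch of a force-field memory: nodes carry an embedding
    (position in semantic space) and a strength that decays unless reinforced."""

    def __init__(self, dim, attract=0.1, repulse=0.05, decay=0.02, forget_thresh=0.05):
        self.positions = np.empty((0, dim))   # node coordinates in semantic space
        self.strengths = np.empty(0)          # node salience / memory strength
        self.attract = attract                # pull between semantically similar nodes
        self.repulse = repulse                # short-range push to avoid collapse
        self.decay = decay                    # passive forgetting rate
        self.forget_thresh = forget_thresh    # nodes weaker than this are dropped

    def add(self, embedding):
        v = embedding / (np.linalg.norm(embedding) + 1e-8)
        self.positions = np.vstack([self.positions, v])
        self.strengths = np.append(self.strengths, 1.0)

    def step(self, query_embedding):
        if len(self.strengths) == 0:
            return
        q = query_embedding / (np.linalg.norm(query_embedding) + 1e-8)

        # Attention-driven force: nodes similar to the current query are reinforced.
        attn = self.positions @ q
        self.strengths += np.clip(attn, 0.0, None) * 0.1

        # Pairwise semantic attraction and short-range repulsion between nodes.
        diff = self.positions[:, None, :] - self.positions[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + 1e-8
        sim = self.positions @ self.positions.T
        pull = self.attract * sim[..., None] * (-diff)        # similar nodes drift together
        push = self.repulse * diff / (dist[..., None] ** 2)   # very close nodes repel
        self.positions += (pull + push).sum(axis=1) * 0.01

        # Decay and forgetting: weak nodes are removed from the field.
        self.strengths *= (1.0 - self.decay)
        keep = self.strengths > self.forget_thresh
        self.positions, self.strengths = self.positions[keep], self.strengths[keep]

    def retrieve(self, query_embedding, k=3):
        q = query_embedding / (np.linalg.norm(query_embedding) + 1e-8)
        scores = (self.positions @ q) * self.strengths  # similarity weighted by strength
        return np.argsort(-scores)[:k]
```

Under these assumptions, each dialogue turn would add a node, call `step` with the current query embedding so salient memories strengthen and drift toward related ones, and call `retrieve` to surface the top-k memories for the agent's context.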
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17763