Keywords: Agent memory, CRUD
Abstract: Equipping agents with memory is essential for solving real-world long-horizon problems. However, most existing memory mechanisms rely on static, hand-crafted workflows, which limits their performance and generalization and highlights the need for a more flexible, learning-based memory framework. In this paper, we reframe memory management as a dynamic decision-making problem. We deconstruct high-level memory processes into fundamental atomic CRUD (Create, Read, Update, Delete) operations, transforming the memory workflow into a learnable action space. By combining supervised fine-tuning (SFT) with reinforcement learning (RL), our agent, AtomMem, learns an autonomous, task-aligned policy that orchestrates memory behaviors tailored to specific task demands. Experimental results across multiple benchmarks demonstrate that AtomMem consistently outperforms prior static-workflow methods. Analysis of the training dynamics reveals that RL effectively shifts the agent from unstructured memory usage to a strategy that prioritizes high-quality memory maintenance, significantly raising the performance upper bound for complex, long-context reasoning.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: agent memory, reinforcement learning in agents
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 6510