Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution

ACL ARR 2026 January Submission4703 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Agent Memory System, Procedural Memory Management, Agent Evolution
Abstract: Procedural memory enables large language model (LLM) agents to internalize "how-to" knowledge and thus reduce redundant trial-and-error. However, existing frameworks predominantly follow a "passive accumulation" paradigm, treating memory as a static, append-only archive. To bridge the gap between static storage and dynamic reasoning, we propose ReMe (Remember Me, Refine Me), a comprehensive framework for experience-driven agent evolution. ReMe manages the memory lifecycle via three mechanisms: 1) multi-faceted distillation, which extracts fine-grained experiences by recognizing success patterns, analyzing failure triggers, and generating comparative insights; 2) context-adaptive reuse, which tailors historical insights to new contexts through scenario-aware indexing; and 3) utility-based refinement, which automatically adds validated memories and prunes outdated ones to maintain a compact, high-quality experience pool. Experiments on BFCL-V3 and AppWorld demonstrate that ReMe establishes a new state of the art among agent memory systems. Crucially, we observe a significant memory-scaling effect: Qwen3-8B equipped with ReMe outperforms the larger, memory-free Qwen3-14B, indicating that self-evolving memory provides a computation-efficient path toward lifelong learning.
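The three-stage lifecycle described in the abstract (distillation, reuse, refinement) can be sketched in miniature. Everything below is a hypothetical illustration, not the authors' implementation: the class and method names (`ProceduralMemory`, `distill`, `retrieve`, `refine`), the word-overlap retrieval, and the additive utility scores are all assumptions made for the sake of a self-contained example.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    scenario: str        # scenario tag used for context-adaptive retrieval
    insight: str         # distilled "how-to" knowledge
    kind: str            # facet: "success" | "failure" | "comparative"
    utility: float = 0.0 # running usefulness score (hypothetical scoring rule)

class ProceduralMemory:
    """Toy sketch of a distill/reuse/refine memory lifecycle (not ReMe's actual code)."""

    def __init__(self, min_utility: float = 0.0, capacity: int = 100):
        self.pool: list[Experience] = []
        self.min_utility = min_utility
        self.capacity = capacity

    # 1) Multi-faceted distillation: store one insight per facet of a trajectory.
    def distill(self, scenario: str, facets: dict[str, str]) -> None:
        for kind, insight in facets.items():
            self.pool.append(Experience(scenario, insight, kind))

    # 2) Context-adaptive reuse: rank stored insights by scenario-word overlap
    #    (a stand-in for whatever scenario-aware index the paper uses).
    def retrieve(self, scenario: str, k: int = 3) -> list[Experience]:
        query = set(scenario.lower().split())
        ranked = sorted(
            self.pool,
            key=lambda e: len(query & set(e.scenario.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    # 3) Utility-based refinement: reward memories validated by downstream
    #    success, decay the rest, then prune low-utility entries.
    def refine(self, validated: list[Experience]) -> None:
        for e in self.pool:
            e.utility += 1.0 if e in validated else -0.1
        self.pool = sorted(self.pool, key=lambda e: e.utility, reverse=True)
        self.pool = [e for e in self.pool if e.utility > self.min_utility]
        self.pool = self.pool[: self.capacity]
```

Under this sketch, an agent would `distill` after each episode, `retrieve` before acting in a new scenario, and periodically `refine` so the pool stays compact; the abstract's "memory-scaling effect" corresponds to the pool accumulating validated insights over time.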
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: memory system, procedural memory, agent self-evolution
Contribution Types: NLP engineering experiment, Reproduction study, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 4703