A Lightweight, Domain-Adaptive Memory System for LLM Agents

Published: 03 Mar 2026, Last Modified: 25 Apr 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: agent memory, retrieval-augmented generation, memory-augmented language models, long-context reasoning
TL;DR: A long-term agent-memory framework that extracts, consolidates, and retrieves information to improve long-context reasoning and adapt easily across domains.
Abstract: Long-term memory helps LLM agents solve tasks that require reasoning over long interaction histories. Recent agentic memory systems can outperform both feeding the full context window to the model and standard retrieval over text chunks, but they often rely on heavy, task-specific context engineering and complex memory pipelines, making them hard to understand, deploy, and transfer to new domains. We introduce LightMem, a lightweight, domain-adaptive memory system that keeps only three core steps: extraction, consolidation, and retrieval. LightMem removes components whose purpose is unclear and cleanly separates what needs human input from what can be automated: users specify only memory metadata and consolidation rules, while the rest of the pipeline is general. During consolidation, LightMem automatically builds a hierarchical memory tree via non-parametric agglomerative clustering, reducing manual design and avoiding task-specific tuning. During retrieval, LightMem traverses this tree to retrieve information across clusters and at multiple granularities, enabling structured access to relevant memories. We evaluate LightMem on two unrelated tasks, personalization and code understanding, and show that it substantially improves accuracy over vanilla RAG and prior agentic memory baselines, with latency on par with the fastest memory baselines.
Submission Number: 21