Memory Type Matters: Enhancing Long-Term Memory in Large Language Models with Hybrid Strategies

ICLR 2026 Conference Submission 827 Authors

02 Sept 2025 (modified: 23 Dec 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Long-Term Memory, LLM, Conversation
Abstract: The memory capabilities of Large Language Models (LLMs) have garnered increasing attention recently. Many approaches adopt Retrieval-Augmented Generation (RAG) techniques to alleviate the "forgetting" problem in LLMs. Despite the great success achieved, existing RAG-based memory approaches typically overlook the differences between memories and apply a single unified strategy to all of them, leading to suboptimal performance. An intuitive question thus arises: can we categorize memories into different types and select an appropriate strategy for each? However, because memory scenarios are topic-rich, structurally complex, and have blurred boundaries, precisely classifying memories is not easy. To address this challenge, we propose a multi-class memory benchmark, termed TriMEM. TriMEM comprises 6,000 dialogue samples with precise annotations of memory types across diverse topics and scenarios. Building on this foundation, we propose a novel memory framework, named MemoType, which adaptively identifies the category of each memory and applies tailored storage and retrieval strategies, thereby achieving strong performance. Extensive experiments on retrieval and generation tasks demonstrate the effectiveness of the proposed approach.
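To make the type-aware idea described in the abstract concrete, the following is a minimal illustrative sketch of routing memories to type-specific stores and retrievers. It is not the authors' MemoType implementation; every identifier (MemoryRouter, classify_type, embed, and the example type labels) is a hypothetical placeholder, and the similarity scoring is a deliberately simple stand-in for whatever per-type strategies the paper actually uses.

```python
# Sketch of type-aware memory storage/retrieval (assumed design, not MemoType itself).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class MemoryEntry:
    text: str
    mem_type: str                      # e.g. "factual", "event", "preference" (assumed labels)
    embedding: List[float] = field(default_factory=list)


class MemoryRouter:
    """Stores each memory under its predicted type and retrieves with a
    type-specific pool instead of one unified RAG pipeline."""

    def __init__(self, classify_type: Callable[[str], str],
                 embed: Callable[[str], List[float]]):
        self.classify_type = classify_type   # caller-supplied classifier (hypothetical)
        self.embed = embed                   # caller-supplied embedder (hypothetical)
        self.stores: Dict[str, List[MemoryEntry]] = {}

    def write(self, text: str) -> None:
        # Classify the memory, then store it in the pool for its type.
        mem_type = self.classify_type(text)
        entry = MemoryEntry(text, mem_type, self.embed(text))
        self.stores.setdefault(mem_type, []).append(entry)

    def read(self, query: str, top_k: int = 3) -> List[str]:
        # Route the query to the store of its predicted type, then rank by a
        # simple dot-product similarity; a real system could plug in different
        # retrievers (dense, temporal, key-value) per type here.
        q_type = self.classify_type(query)
        q_vec = self.embed(query)
        candidates = self.stores.get(q_type, [])
        scored = sorted(
            candidates,
            key=lambda e: sum(a * b for a, b in zip(q_vec, e.embedding)),
            reverse=True,
        )
        return [e.text for e in scored[:top_k]]
```

Under this sketch, the key design choice is that classification happens once at write time and again at read time, so both storage layout and retrieval strategy can differ by memory type rather than sharing one embedding index.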
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 827