SGMem: Sentence Graph Memory for Long-Term Conversational Agents

Submitted to ICLR 2026 · 16 Sept 2025 (modified: 11 Feb 2026) · CC BY 4.0
Keywords: Long-term conversational agents, Memory management, Retrieval-augmented generation (RAG)
TL;DR: Our paper introduces SGMem, a sentence-graph memory framework that organizes and retrieves multi-granularity dialogue contexts to improve long-term conversational agents.
Abstract: Long-term conversational agents require effective memory management to handle dialogue histories that exceed the context window of large language models (LLMs). Existing methods based on fact extraction or summarization reduce redundancy but struggle to organize and retrieve relevant information across different granularities of dialogue and generated memory. We introduce SGMem (Sentence Graph Memory), which represents dialogue as sentence-level graphs within chunked units, capturing associations across turn-, round-, and session-level contexts. By combining retrieved raw dialogue with generated memory such as summaries, facts, and insights, SGMem supplies LLMs with coherent and relevant context for response generation. Experiments on LongMemEval and LoCoMo show that SGMem consistently improves accuracy and outperforms strong baselines in long-term conversational question answering.
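The abstract's core idea (sentence-level nodes linked at turn and round granularity, with retrieval that expands over graph neighbors) can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: the splitting, edge scheme, and overlap-based scoring are all assumptions standing in for whatever SGMem actually uses.

```python
# Toy sentence-graph memory sketch (illustrative; not the SGMem implementation).
# Sentences become graph nodes; edges link sentences within a turn (turn-level)
# and across adjacent turns (round-level). Retrieval scores nodes by word
# overlap with the query and then expands one hop to pull in connected context.
from collections import defaultdict

def build_sentence_graph(sessions):
    """sessions: list of sessions; each session is a list of turn strings."""
    nodes, edges = [], defaultdict(set)
    for s_id, session in enumerate(sessions):
        prev_turn_nodes = []
        for t_id, turn in enumerate(session):
            turn_nodes = []
            for sent in (p.strip() for p in turn.split(".") if p.strip()):
                idx = len(nodes)
                nodes.append({"session": s_id, "turn": t_id, "text": sent})
                turn_nodes.append(idx)
            # Turn-level edges: sentences uttered in the same turn.
            for a in turn_nodes:
                for b in turn_nodes:
                    if a != b:
                        edges[a].add(b)
            # Round-level edges: link to the preceding turn's sentences.
            for a in turn_nodes:
                for b in prev_turn_nodes:
                    edges[a].add(b)
                    edges[b].add(a)
            prev_turn_nodes = turn_nodes
    return nodes, edges

def retrieve(nodes, edges, query, k=2):
    """Rank nodes by query-word overlap, keep top-k, expand one graph hop."""
    q = set(query.lower().split())
    ranked = sorted(range(len(nodes)),
                    key=lambda i: -len(q & set(nodes[i]["text"].lower().split())))
    hits = set(ranked[:k])
    for i in ranked[:k]:
        hits |= edges[i]  # one-hop expansion brings in neighboring context
    return [nodes[i]["text"] for i in sorted(hits)]
```

A retrieved set like this would then be concatenated with generated memory (summaries, facts, insights) before response generation; a real system would replace the word-overlap scorer with embedding similarity.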
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 6717