Abstract: Memory-augmented Large Language Models (LLMs) can recall past contexts and reason over the recalled results (termed the recall-reason step). However, repeated recall-reason steps may produce biased thoughts, i.e., inconsistent reasoning paths over the same recalled results. Motivated by the observation that humans memorize metacognitive thoughts rather than event details, we propose a novel memory-augmented framework called Think-in-Memory (TiM) to flexibly utilize historical context. Concretely, we formulate a self-organizing memory mechanism equipped with a metacognition space and stationary operation actions, leveraging role-playing LLM agents to serve as the thought generator, retriever, and organizer. Supported by such multi-agent self-organization, TiM imitates human-level metacognition to memorize and update historical context as metacognitive thoughts without suffering from reasoning inconsistency. TiM processes ultra-long historical context in a plug-and-play manner to benefit downstream interactions. To conduct evaluations on a more complex task, we adopt clinical diagnosis as the evaluation setting: (1) we build a role-play simulator to model long-term interactions between doctors and patients, and (2) we collect a multi-turn medical consultation dataset from real-world hospitals. Two daily conversation datasets are also included. Experiments demonstrate that our method achieves remarkable improvements on memory-augmented long-term dialogues covering both daily and medical topics.
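The abstract describes a multi-agent memory pipeline (thought generator, retriever, and organizer operating over a metacognition space). The following is a minimal Python sketch of how such a loop might look; all class and function names (ThoughtMemory, generate_thought, retrieve_thoughts, dialogue_turn) are hypothetical illustrations, not the paper's actual implementation, and `llm` stands for any callable mapping a prompt string to a completion string.

```python
# Hypothetical sketch of a TiM-style memory loop, assuming a generic LLM callable.
from dataclasses import dataclass, field
from typing import Callable, List

LLM = Callable[[str], str]


@dataclass
class ThoughtMemory:
    """Metacognition space: stores distilled thoughts instead of raw dialogue turns."""
    thoughts: List[str] = field(default_factory=list)

    def insert(self, thought: str) -> None:
        self.thoughts.append(thought)

    def forget(self, predicate: Callable[[str], bool]) -> None:
        # One possible "operation action": drop thoughts matching a predicate.
        self.thoughts = [t for t in self.thoughts if not predicate(t)]


def generate_thought(llm: LLM, user_turn: str, reply: str) -> str:
    # Role-playing "thought generator" agent (hypothetical prompt).
    return llm(
        f"Summarize the key fact to remember from:\nUser: {user_turn}\nAssistant: {reply}"
    )


def retrieve_thoughts(memory: ThoughtMemory, query: str, k: int = 3) -> List[str]:
    # Role-playing "thought retriever": a naive keyword-overlap stand-in
    # for whatever retrieval mechanism the paper actually uses.
    scored = sorted(
        memory.thoughts,
        key=lambda t: len(set(t.lower().split()) & set(query.lower().split())),
        reverse=True,
    )
    return scored[:k]


def dialogue_turn(llm: LLM, memory: ThoughtMemory, user_turn: str) -> str:
    # Recall stored thoughts, answer, then update memory with a new thought.
    relevant = retrieve_thoughts(memory, user_turn)
    reply = llm(f"Known thoughts: {relevant}\nUser: {user_turn}\nAssistant:")
    memory.insert(generate_thought(llm, user_turn, reply))  # organizer step
    return reply
```

In this sketch, answers are conditioned on previously stored thoughts rather than on re-reading the raw history, which is the behavior the abstract attributes to TiM for avoiding inconsistent reasoning over the same recalled results.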
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: applications, knowledge augmented, retrieval
Contribution Types: NLP engineering experiment
Languages Studied: English, Chinese
Submission Number: 1458