INMS: Memory Sharing for Large Language Model based Agents

ACL ARR 2024 December Submission1986 Authors

16 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: The adaptation of Large Language Model (LLM)-based agents to execute tasks via natural language prompts represents a significant advancement, notably eliminating the need for explicit retraining or fine-tuning. However, such agents are constrained by the comprehensiveness and diversity of the provided examples, leading to outputs that often diverge significantly from expected results, especially for open-ended questions. Although Retrieval-Augmented Generation (RAG) can effectively address this problem, its implementation may be hindered by the scarcity of suitable external databases or by the insufficiency and obsolescence of examples in existing ones. This work aims to address the shortage and obsolescence of external databases. We propose a novel INteractive Memory Sharing (INMS) framework, which integrates real-time memory filtering, storage, and retrieval to enhance the In-Context Learning process. The framework allows memories to be shared among agents, whereby the interactions and shared memories effectively enhance the diversity of the memory pool. Collective self-enhancement through interactive learning among agents facilitates the evolution from individual intelligence to collective intelligence. Moreover, the dynamically growing memory pool is utilized not only to improve the quality of responses but also to train and enhance the retriever in real time. Extensive experiments on three distinct domains involving specialized agents demonstrate that the INMS framework significantly improves the agents' performance in addressing open-ended questions.
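The filter–store–retrieve loop the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the class names, the quality-score filter, and the word-overlap retrieval are all hypothetical stand-ins (the actual framework trains a retriever on the growing memory pool in real time).

```python
# Toy sketch of a shared memory pool with filtering, storage, and retrieval,
# loosely mirroring the INMS loop. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Memory:
    prompt: str
    answer: str
    score: float  # quality score assigned by the filtering step (assumed)


@dataclass
class MemoryPool:
    threshold: float = 0.5  # filter: keep only memories above this quality
    memories: list = field(default_factory=list)

    def add(self, memory: Memory) -> bool:
        """Store a memory only if it passes the quality filter."""
        if memory.score >= self.threshold:
            self.memories.append(memory)
            return True
        return False

    def retrieve(self, query: str, k: int = 2) -> list:
        """Toy retrieval: rank stored memories by word overlap with the query.
        (The paper instead trains a retriever on the memory pool.)"""
        def overlap(m: Memory) -> int:
            return len(set(query.lower().split()) & set(m.prompt.lower().split()))
        return sorted(self.memories, key=overlap, reverse=True)[:k]


# Two agents share one pool: memories contributed by one agent become
# in-context examples for another, growing the pool's diversity over time.
pool = MemoryPool()
pool.add(Memory("how to sort a list in python", "use sorted(xs)", 0.9))
pool.add(Memory("capital of france", "Paris", 0.2))  # rejected by the filter
hits = pool.retrieve("sort a python list")
```

Retrieved memories would then be prepended as in-context examples to the querying agent's prompt; the real framework additionally uses the accumulated pool as training signal for the retriever.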
Paper Type: Long
Research Area: Generation
Research Area Keywords: automatic evaluation, few-shot generation, domain adaptation, text-to-text generation, retrieval-augmented generation, interactive and collaborative generation
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1986