Adaptive Friend Agent: Personalized Multi-User Memory for Conversational AI

ICLR 2026 Conference Submission21088 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multi-user personalization, Memory-augmented conversational agents, Datasets and benchmarks, Large language models (LLMs)
Abstract: Most conversational AI systems today are designed to engage a single user, which limits their effectiveness in real-world, multi-user settings. This work presents the Adaptive Friend Agent (AFA), a personalized conversational agent framework that supports long-term, user-specific interaction across multiple individuals. AFA integrates off-the-shelf speaker recognition to distinguish users by voice, retrieves relevant conversational memory from a per-user vector database, and generates personalized responses with a large language model (LLM). To train and evaluate the system, we introduce Personalized Agent chaT (PAT), a large-scale synthetic dataset simulating persona-grounded human-AI conversations. PAT comprises over 58,000 dialogue turns covering diverse scenarios and user profiles. Experimental results show that AFA, fine-tuned on PAT with the LLaMA-70B model, outperforms strong commercial and ablated baselines on BLEU and ROUGE metrics. Ablation studies confirm the critical role of the memory module and speaker identification in supporting coherent, personalized dialogue. AFA represents a practical step toward scalable conversational agents that adapt to individual users in shared environments.
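
The abstract outlines a three-stage pipeline: identify the speaker by voice, retrieve that user's memories from a per-user vector store, and condition an LLM on the retrieved context. The sketch below is a minimal, hypothetical illustration of that flow under stated assumptions; the class and callable names (AdaptiveFriendAgent, UserMemory, identify_speaker, embed, generate) are placeholders, not the authors' implementation, and the speaker-ID model, embedder, and LLM are passed in as opaque callables.

```python
# Hypothetical sketch of the AFA pipeline described in the abstract:
# speaker recognition -> per-user memory retrieval -> LLM response generation.
# All names are illustrative placeholders, not the authors' code.
import math
from dataclasses import dataclass, field


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class UserMemory:
    """Per-user vector store of (embedding, memory text) pairs."""
    entries: list = field(default_factory=list)

    def add(self, embedding, text):
        self.entries.append((embedding, text))

    def retrieve(self, query_embedding, k=3):
        # Rank stored memories by similarity to the current utterance.
        ranked = sorted(self.entries,
                        key=lambda e: cosine(e[0], query_embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]


class AdaptiveFriendAgent:
    def __init__(self, identify_speaker, embed, generate):
        # identify_speaker: audio -> user_id (e.g., an off-the-shelf speaker-ID model)
        # embed: text -> vector; generate: prompt -> reply (e.g., a fine-tuned LLM)
        self.identify_speaker = identify_speaker
        self.embed = embed
        self.generate = generate
        self.memories = {}  # user_id -> UserMemory

    def respond(self, audio, utterance):
        # 1) Distinguish the user by voice.
        user_id = self.identify_speaker(audio)
        memory = self.memories.setdefault(user_id, UserMemory())
        # 2) Retrieve relevant memories for this user.
        query_vec = self.embed(utterance)
        context = memory.retrieve(query_vec, k=3)
        # 3) Generate a personalized response conditioned on retrieved memory.
        prompt = (f"Known facts about {user_id}:\n" + "\n".join(context) +
                  f"\nUser says: {utterance}\nReply:")
        reply = self.generate(prompt)
        # Write the new utterance back into this user's memory.
        memory.add(query_vec, f"User said: {utterance}")
        return reply
```

In use, each component would be swapped for a real model (a speaker-verification system, a text embedder, and the fine-tuned LLaMA-70B generator); the key design point mirrored here is that memory is keyed by speaker identity, so retrieval never mixes context across users sharing the same environment.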
Primary Area: datasets and benchmarks
Submission Number: 21088