Abstract: Traditional Large Language Models (LLMs) are typically designed to interact with a single person, tailoring responses to that individual. This limits multi-user interaction and makes them impractical for shared environments such as households and workplaces.
In this paper, we introduce the Adaptive Friend Agent (AFA), a personalized LLM framework capable of identifying different individuals using voice recognition and providing personalized responses while preserving each person's conversation history. AFA integrates SpeechBrain, a speech-processing toolkit used to identify and distinguish individual speakers; a Vector Database (VectorDB) that stores personalized information and conversation history for each user interacting with the model; and a fine-tuned LLM that accesses these individual databases to generate personalized responses. Additionally, we introduce Personalized Agent chaT (PAT), a synthetically generated dataset containing dialogues between a personalized AI assistant and users, each with unique personality traits, across 12 everyday use cases where individuals interact with LLMs. The PAT dataset is used to fine-tune the LLM and later serves as ground truth for evaluating our fine-tuned model and other state-of-the-art LLMs. Experimental results demonstrate that our model outperforms existing models in user identification and personalized response generation, achieving the highest accuracy, with a BLEU-1 score of 81.31\% and a ROUGE-1 score of 43.04\%. The complete code and data are available in an anonymous repository: \href{https://anonymous.4open.science/r/PAT-6110/README.md}{Link}.
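To make the described pipeline concrete, the following is a minimal illustrative sketch of the AFA loop: identify the speaker with SpeechBrain's pretrained speaker-verification model, retrieve that user's conversation history, and generate a personalized reply. The enrollment dictionary, the in-memory history store (a toy stand-in for the per-user VectorDB), and the `generate_reply` stub are assumptions for illustration, not the paper's implementation.

```python
"""Minimal sketch of the AFA turn-handling loop, assuming SpeechBrain's
pretrained ECAPA speaker-verification model. The LLM call is stubbed out;
the real system queries a fine-tuned model backed by per-user VectorDBs."""
from speechbrain.inference.speaker import SpeakerRecognition  # speechbrain >= 1.0

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# Enrollment audio and conversation histories per known user
# (hypothetical file names; toy stand-in for the per-user VectorDB).
enrolled = {"alice": "alice_enroll.wav", "bob": "bob_enroll.wav"}
histories: dict[str, list[str]] = {"alice": [], "bob": []}


def identify_speaker(utterance_wav: str) -> str:
    """Return the enrolled user whose reference audio best matches the utterance."""
    scores = {
        user: verifier.verify_files(ref_wav, utterance_wav)[0].item()
        for user, ref_wav in enrolled.items()
    }
    return max(scores, key=scores.get)


def generate_reply(user: str, text: str, history: list[str]) -> str:
    """Stub for the fine-tuned LLM conditioned on the user's stored history."""
    return f"[reply to {user}, conditioned on {len(history)} past turns]"


def handle_turn(utterance_wav: str, transcript: str) -> str:
    """One conversational turn: identify speaker, respond, persist the turn."""
    user = identify_speaker(utterance_wav)
    reply = generate_reply(user, transcript, histories[user])
    histories[user].append(transcript)
    return reply
```

In practice, AFA would replace the dictionary lookup with nearest-neighbour retrieval over per-user embeddings in the VectorDB and condition the fine-tuned LLM on the retrieved history; the sketch only mirrors the control flow summarized in the abstract.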
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Language Modeling, Dialogue and Interactive Systems, Human-Centered NLP, Interpretability and Analysis of Models for NLP, Resources and Evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 6648