Keywords: Personalization, Large Language Models, Agents
TL;DR: PersonaAgent introduces a unified memory–action framework with test-time alignment that enables LLM agents to sustain personalized, coherent, and adaptive multi-turn interactions.
Abstract: Large Language Model (LLM)-powered agents have emerged as a new paradigm for complex, multi-turn human-AI interactions, yet most existing systems adopt a one-size-fits-all approach, neglecting the evolving preferences and goals of individual users. This limitation hinders their ability to maintain alignment and coherence over extended multi-turn interactions and dynamic tasks. To address this gap, we propose PersonaAgent, the first personalized LLM agent framework explicitly designed for multi-turn, long-horizon personalization and alignment. Specifically, PersonaAgent integrates two complementary components: a personalized memory module comprising episodic and semantic memory mechanisms, and a personalized action module that enables the agent to perform tool actions tailored to the user. At the core, the persona (defined as a unique system prompt for each user) functions as an intermediary: it leverages insights from personalized memory to control agent actions, while the outcomes of these actions in turn refine the memory. Based on this framework, we propose a test-time user-preference alignment strategy that simulates the latest multi-turn interactions to optimize the persona prompt, ensuring alignment with user preferences through textual loss feedback between simulated and ground-truth responses. Experimental evaluations demonstrate that PersonaAgent significantly outperforms baseline methods in diverse multi-turn scenarios and exhibits a scaling trend in test-time user-preference alignment. These results underscore that PersonaAgent offers a pathway toward human-centered LLM agents capable of coherent and personalized multi-turn interaction with users.
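To make the test-time alignment idea concrete, below is a minimal sketch of a persona-optimization loop of the kind the abstract describes: simulate the agent's replies to recent turns under the current persona prompt, obtain a textual "loss" by comparing simulated against ground-truth responses, and use that feedback to rewrite the persona. All names here (`align_persona`, the `llm` callable, the `history` schema, `n_steps`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of PersonaAgent-style test-time persona alignment.
# Assumes `llm` is any callable mapping a prompt string to generated text.

def align_persona(persona: str, history: list[dict], llm, n_steps: int = 3) -> str:
    """Refine a per-user persona prompt against recent interactions.

    history: recent multi-turn records, each a dict with the user's 'query'
             and the 'ground_truth' response the user actually preferred.
    """
    for _ in range(n_steps):
        feedback = []
        for turn in history:
            # Simulate the agent's reply under the current persona prompt.
            simulated = llm(f"{persona}\nUser: {turn['query']}\nAssistant:")
            # Textual loss: an LLM critique of simulated vs. ground-truth reply.
            feedback.append(llm(
                "Compare the simulated reply to the user's preferred reply and "
                "state, in one sentence, how the persona should change.\n"
                f"Simulated: {simulated}\nPreferred: {turn['ground_truth']}"
            ))
        # Rewrite the persona using the aggregated textual feedback.
        persona = llm(
            "Rewrite this persona prompt so the agent better matches the "
            f"user's preferences.\nPersona: {persona}\n"
            f"Feedback: {' '.join(feedback)}"
        )
    return persona
```

Under this reading, increasing `n_steps` (or the amount of simulated interaction) is the knob behind the reported test-time scaling trend, though the paper's exact optimization procedure may differ.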
Submission Number: 169