Benchmarking and Enhancing Rational Preference Utilization for Personalized Assistants: A Pragmatic View
Keywords: LLM, Personalization
TL;DR: We propose RPA, a framework with a benchmark and method to analyze memory’s dual effects and enable rational personalization.

Abstract: Large language model (LLM)-powered assistants have recently integrated memory mechanisms that record user preferences, leading to more personalized and user-aligned responses.
However, the dual effects of personalization remain underexplored, and its adverse consequences are especially salient in real-world applications.
To address this gap, we propose Rational Personalization Acts (RPA), a framework that reformulates memory utilization as a problem of pragmatic intent reasoning.
Building on this perspective, we develop **RPEval**, a benchmark comprising a personalized intent reasoning dataset and a multi-granularity evaluation protocol.
RPEval not only reveals the widespread phenomenon of irrational personalization in existing LLMs, but also, through a novel error pattern analysis, illustrates how irrational personalization can undermine user experience.
Finally, we introduce RP-Reasoner, which treats memory utilization as a pragmatic reasoning process, enabling the selective integration of personalized information. Experimental results demonstrate that our method significantly outperforms carefully designed baselines on **RPEval**, and resolves 80% of the bad cases observed in a large-scale commercial personalized assistant, highlighting the potential of pragmatic reasoning to mitigate irrational personalization. Our benchmark is publicly available at https://anonymous.4open.science/r/RPEval-E4B0.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 5349