Keywords: personalized generation, reasoning, feedback incorporation, ambiguity
TL;DR: Explores LLMs' ability to perform consistent personalized generation by reasoning about user feedback, and proposes a memory coreset method to improve it.
Abstract: This paper explores LLMs' ability to perform consistent personalized generation that incorporates user feedback. We first show that it is challenging for LLMs to (1) use feedback consistently in long conversations, (2) reason over multiple pieces of partial or conflicting feedback, and (3) adapt to changing preferences within a conversation. These challenges show that selecting which input information to retain is crucial for improving multi-turn LLM performance. We propose a novel solution of building a **CoreSet** of past conversations, a principled approach to personalization. Beyond addressing the long-history, conflict, and preference-change challenges, coresets also reduce the number of input tokens, making these services more cost-effective. Our coreset algorithm outperforms state-of-the-art memory and personalization baselines on both synthetic and real-world ambiguity datasets.
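For intuition about what "building a coreset of past conversations" could look like, here is a minimal sketch. The abstract does not specify the paper's construction, so this uses a generic k-center greedy selection over embedded conversation turns, a common heuristic for picking a small, diverse subset of a long history; the embedding dimensions, turn counts, and function names are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: generic k-center greedy coreset selection over
# embedded conversation turns; not the paper's actual method.
import numpy as np

def kcenter_greedy_coreset(embeddings: np.ndarray, k: int) -> list[int]:
    """Pick k turn indices whose embeddings cover the set (farthest-point heuristic)."""
    n = embeddings.shape[0]
    selected = [0]  # seed with the first turn (arbitrary choice)
    # distance from every turn to its nearest already-selected center
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < min(k, n):
        nxt = int(np.argmax(dists))          # farthest turn from the current coreset
        selected.append(nxt)
        new_d = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        dists = np.minimum(dists, new_d)     # update nearest-center distances
    return selected

# Toy usage: random stand-ins for sentence embeddings of 200 past turns.
rng = np.random.default_rng(0)
turn_embeddings = rng.normal(size=(200, 384))
coreset_idx = kcenter_greedy_coreset(turn_embeddings, k=10)
print("turns kept in context:", sorted(coreset_idx))
```

The selected turns would then be placed in the prompt in place of the full history, which is one way such a coreset could cut input tokens while preserving diverse feedback.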
Submission Number: 157