Keywords: Large Language Models, RLHF, Personalization
TL;DR: We propose P-RLHF, a personalized RLHF framework for learning from personalized human feedback.
Abstract: Personalized large language models (LLMs) are designed to tailor responses to individual user preferences. While Reinforcement Learning from Human Feedback (RLHF) is a commonly used framework for aligning LLMs with human preferences, vanilla RLHF assumes that all human preferences share the same distribution, preventing fine-tuned LLMs from generating personalized content when user preferences are diverse. In this work, we propose Personalized-RLHF (P-RLHF), an efficient framework that utilizes a lightweight user model to capture individual user preferences and jointly learns the user model and the personalized LLM from human feedback. P-RLHF has three key characteristics: it (1) enables an LLM to generate personalized content and scale efficiently with a growing number of users; (2) handles both explicit user preferences described as textual input and implicit user preferences encoded in the feedback data; and (3) eliminates the need for users to fully articulate their preferences, which is normally required for prompting LLMs to generate personalized content yet is often impractical to obtain in real-world scenarios. Our empirical results show that personalized LLMs trained using P-RLHF generate content more closely aligned with individual user preferences, outperforming vanilla, non-personalized RLHF across different tasks.
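To make the idea of a lightweight user model jointly trained with the LLM concrete, below is a minimal sketch, not the paper's actual implementation: it assumes a per-user soft-prompt embedding prepended to a small causal LM and a DPO-style preference loss over per-user (prompt, chosen, rejected) pairs. The base model (`gpt2`), the name `user_soft_prompt`, the soft-prompt length, and the choice of a DPO-style objective (with the reference model omitted for brevity) are all illustrative assumptions.

```python
# Illustrative sketch only (assumptions noted above), not the authors' code.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base LLM
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token

NUM_USERS, N_SOFT = 100, 4                            # assumed sizes
dim = base.get_input_embeddings().embedding_dim
# "Lightweight user model": one learned soft-prompt vector block per user.
user_soft_prompt = torch.nn.Embedding(NUM_USERS, N_SOFT * dim)

def seq_logprob(user_ids, texts):
    """Sum of token log-probs of each text, conditioned on the user's soft prompt."""
    enc = tok(texts, return_tensors="pt", padding=True)
    tok_emb = base.get_input_embeddings()(enc.input_ids)        # (B, T, dim)
    u = user_soft_prompt(user_ids).view(-1, N_SOFT, dim)        # (B, N_SOFT, dim)
    inputs = torch.cat([u, tok_emb], dim=1)
    mask = torch.cat(
        [torch.ones(u.shape[:2], dtype=torch.long), enc.attention_mask], dim=1
    )
    logits = base(inputs_embeds=inputs, attention_mask=mask).logits
    # Score only the real tokens: token t is predicted from position N_SOFT + t - 1.
    logits = logits[:, N_SOFT - 1 : -1, :]
    logp = torch.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, enc.input_ids.unsqueeze(-1)).squeeze(-1)
    return (tok_logp * enc.attention_mask).sum(-1)

def personalized_preference_loss(user_ids, prompts, chosen, rejected, beta=0.1):
    """DPO-style loss on per-user preference pairs (reference model omitted)."""
    lp_c = seq_logprob(user_ids, [p + c for p, c in zip(prompts, chosen)])
    lp_r = seq_logprob(user_ids, [p + r for p, r in zip(prompts, rejected)])
    return -F.logsigmoid(beta * (lp_c - lp_r)).mean()

# One joint update step over the user embeddings and the base LLM.
opt = torch.optim.AdamW(
    list(user_soft_prompt.parameters()) + list(base.parameters()), lr=1e-5
)
opt.zero_grad()
loss = personalized_preference_loss(
    torch.tensor([3]),
    ["User: recommend a movie\nAssistant:"],
    [" A quiet indie drama."],
    [" The loudest new blockbuster."],
)
loss.backward()
opt.step()
```

Because each user contributes only a small embedding rather than a separate fine-tuned model, this kind of design scales with the number of users while implicit preferences are absorbed into the per-user parameters during joint training; explicit textual preferences can simply be included in the prompt.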
Submission Number: 110