Pareto-Optimal Learning from Preferences with Hidden Context

Published: 07 Aug 2024, Last Modified: 07 Aug 2024, RLSW 2024 Poster, CC BY 4.0
Confirmation: Yes
Keywords: Pareto Optimality, Reinforcement Learning from Human Feedback, Lexicase Selection, Hidden Context
TL;DR: Creating a set of reward models or policies that are Pareto-optimal with respect to preferences aids in overcoming issues caused by hidden context in preferences.
Abstract: Ensuring that AI models align with human values is essential for their safety and functionality. Reinforcement learning from human feedback (RLHF) uses human preferences to achieve this alignment. However, preferences sourced from diverse populations can result in point estimates of human values that are sub-optimal or unfair to specific groups. We propose Pareto Optimal Preference Learning (POPL), which frames discrepant group preferences as objectives with potential trade-offs, aiming for policies that are Pareto-optimal on the preference dataset. POPL uses lexicase selection, an iterative process that selects diverse, Pareto-optimal solutions. Our empirical evaluations demonstrate that POPL surpasses baseline methods in learning sets of reward functions, effectively catering to distinct groups without access to the number of groups or group membership labels. Furthermore, we illustrate that POPL can serve as a foundation for techniques that optimize specific notions of group fairness, ensuring inclusive and equitable alignment of AI models.
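To make the selection mechanism referenced above concrete, below is a minimal sketch of lexicase selection applied to preference data, where each individual preference comparison is treated as its own objective. The names `candidates`, `preferences`, and `score_fn` are hypothetical placeholders, not the authors' implementation; the sketch only illustrates the general lexicase procedure, whose survivors are known to lie on the Pareto front of the individual cases.

```python
import random


def lexicase_select(candidates, preferences, score_fn):
    """One lexicase selection event over preference 'cases'.

    candidates:  list of candidate reward models or policies (hypothetical objects)
    preferences: list of preference comparisons, each used as a separate objective
    score_fn(candidate, pref) -> float: higher means the candidate better satisfies
        that single preference (e.g. 1.0 if it ranks the preferred segment higher,
        0.0 otherwise)
    """
    pool = list(candidates)
    cases = list(preferences)
    random.shuffle(cases)  # a fresh random case ordering for each selection event
    for case in cases:
        if len(pool) == 1:
            break
        best = max(score_fn(c, case) for c in pool)
        # keep only candidates that are elite on this particular preference
        pool = [c for c in pool if score_fn(c, case) == best]
    return random.choice(pool)  # break any remaining ties at random


def select_candidate_set(candidates, preferences, score_fn, n_select):
    """Repeated selection events yield a diverse set of survivors, each of which
    is Pareto-optimal with respect to the individual preferences."""
    return [lexicase_select(candidates, preferences, score_fn) for _ in range(n_select)]
```

Because each preference acts as its own filtering criterion rather than being averaged into a single score, candidates that satisfy different (possibly conflicting) subsets of the population's preferences can all survive, which is what lets the selected set cover distinct hidden groups without group labels.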
Submission Number: 13