A density estimation perspective on learning from pairwise human preferences

Published: 28 Feb 2024, Last Modified: 28 Feb 2024
Accepted by TMLR
Authors that are also TMLR Expert Reviewers: ~Daniel_D._Johnson1
Abstract: Learning from human feedback (LHF)—and in particular learning from pairwise preferences—has recently become a crucial ingredient in training large language models (LLMs), and has been the subject of much research. Most recent works frame it as a reinforcement learning problem, where a reward function is learned from pairwise preference data and the LLM is treated as a policy which is adapted to maximize the rewards, often under additional regularization constraints. We propose an alternative interpretation which centers on the generative process for pairwise preferences and treats LHF as a density estimation problem. We provide theoretical and empirical results showing that for a family of generative processes defined via preference behavior distribution equations, training a reward function on pairwise preferences effectively models an annotator's implicit preference distribution. Finally, we discuss and present findings on "annotator misspecification"—failure cases where wrong modeling assumptions are made about annotator behavior, resulting in poorly adapted models—suggesting that approaches that learn from pairwise human preferences could have trouble learning from a population of annotators with diverse viewpoints.
Certifications: Expert Certification
Submission Length: Long submission (more than 12 pages of main content)
Code: https://github.com/google-deepmind/pbde
Supplementary Material: zip
Assigned Action Editor: ~Greg_Durrett1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1861
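
To make the abstract's central claim concrete, below is a minimal sketch (plain NumPy; not code from the linked repository, and all names are illustrative assumptions): if an annotator's pairwise choices follow the Bradley-Terry / Luce rule over an implicit preference distribution p_tilde, then fitting one reward per response by maximum likelihood recovers p_tilde as the softmax of the learned rewards—reward learning acting as density estimation.

```python
# Minimal sketch (plain NumPy, not the authors' code) of the paper's claim
# in the simplest setting: an annotator whose pairwise choices follow the
# Bradley-Terry / Luce rule over an implicit preference distribution
# p_tilde. Maximum-likelihood reward fitting then recovers p_tilde as the
# softmax of the learned rewards.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical implicit preference distribution over 5 candidate responses.
p_tilde = np.array([0.4, 0.25, 0.2, 0.1, 0.05])
n_items = len(p_tilde)

# Simulate pairwise preferences: draw random distinct pairs (i, j) and let
# i win with probability p_tilde[i] / (p_tilde[i] + p_tilde[j]).
n_pairs = 50_000
i = rng.integers(n_items, size=n_pairs)
j = rng.integers(n_items, size=n_pairs)
keep = i != j
i, j = i[keep], j[keep]
i_wins = rng.random(i.size) < p_tilde[i] / (p_tilde[i] + p_tilde[j])
winners = np.where(i_wins, i, j)
losers = np.where(i_wins, j, i)

# Fit one reward per item by gradient descent on the Bradley-Terry negative
# log-likelihood  -log sigmoid(r[winner] - r[loser])  (a convex problem).
r = np.zeros(n_items)
lr = 1.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(r[winners] - r[losers])))  # P(winner beats loser)
    grad = np.zeros(n_items)
    np.add.at(grad, winners, s - 1.0)  # d(-log sigmoid)/d r[winner]
    np.add.at(grad, losers, 1.0 - s)   # d(-log sigmoid)/d r[loser]
    r -= lr * grad / winners.size

# Pairwise data identifies r only up to an additive constant; the softmax
# removes it and should approximately recover p_tilde.
p_hat = np.exp(r - r.max())
p_hat /= p_hat.sum()
print("true p_tilde:", p_tilde)
print("recovered   :", p_hat.round(3))
```

The softmax step reflects the density-estimation reading: pairwise comparisons determine rewards only up to an additive constant, and normalizing the exponentiated rewards turns them back into the annotator's implicit distribution.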