RLHF and IIA: Perverse Incentives

Published: 17 Jun 2024, Last Modified: 02 Jul 2024
Venue: ICML 2024 Workshop MHFAIA (Oral)
License: CC BY 4.0
Keywords: RLHF, independence of irrelevant alternatives, reinforcement learning, language modeling
TL;DR: Existing algorithms for RLHF can incentivize responses at odds with preferences because they are based on models that assume independence of irrelevant alternatives.
Abstract: Existing algorithms for reinforcement learning from human feedback (RLHF) can incentivize responses at odds with human preferences because they are based on models that assume independence of irrelevant alternatives (IIA). The perverse incentives induced by IIA hinder innovation in query formats and learning algorithms.
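
For context, the IIA property referenced in the abstract can be illustrated with the Bradley-Terry / Plackett-Luce choice models commonly used to fit reward functions in RLHF. The sketch below is not taken from the paper; the notation (reward $r$, choice set $S$, responses $a, a'$) is assumed for illustration only. Under a Plackett-Luce model, the probability of choosing response $a$ from a set $S$ of alternatives is

$$
P(a \mid S) \;=\; \frac{\exp r(a)}{\sum_{b \in S} \exp r(b)},
\qquad
\frac{P(a \mid S)}{P(a' \mid S)} \;=\; \frac{\exp r(a)}{\exp r(a')}.
$$

Because the ratio on the right involves only $a$ and $a'$, adding or removing other alternatives from $S$ cannot change the model's relative preference between them; this is the IIA assumption the abstract argues can induce perverse incentives. With $|S| = 2$ the model reduces to the familiar Bradley-Terry pairwise comparison.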
Submission Number: 5
