04 Oct 2024 · CC BY 4.0
There is increasingly widespread use of reward model learning from human preferences to align AI systems with human values, with applications including large language models, recommendation systems, and robotic control. Nevertheless, a fundamental understanding of our ability to successfully learn utility functions in this framework remains limited. We initiate this line of work by studying the learnability of linear utility functions from pairwise comparison queries. In particular, we consider two learning objectives. The first is to predict out-of-sample responses to pairwise comparisons; the second is to approximately recover the true parameters of the utility function. We show that in the passive learning setting, linear utilities are efficiently learnable with respect to the first objective, both when query responses are uncorrupted by noise and under Tsybakov noise when the distributions are sufficiently "nice". In contrast, we show that the utility parameters are not learnable for a large class of data distributions without strong modeling assumptions, even when query responses are noise-free. Next, we analyze the learning problem in an active learning setting. In this case, we show that even the second objective is efficiently learnable, and we present algorithms for both the noise-free and noisy query response settings. This qualitative learnability gap between passive and active learning from pairwise comparisons suggests that the conventional alignment practice of simply annotating a fixed set of queries may fail to yield effective reward model estimates, an issue that can be remedied through more deliberate query selection.
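The distinction between the two objectives can be illustrated with a minimal numpy sketch, not taken from the paper: pairwise comparisons under a linear utility reduce to classifying difference vectors, so passive learning of objective one is ordinary logistic regression on those differences, while objective two asks how close the fitted direction is to the true parameter vector. All names here (`w_true`, the Gaussian item distribution, the step size) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000

# Hypothetical ground-truth linear utility u(x) = w_true . x (illustrative only)
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)

# Pairwise comparison data: items drawn i.i.d. Gaussian; label says whether a beats b.
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))
X = A - B                             # comparisons reduce to difference vectors
y = (X @ w_true > 0).astype(float)    # noise-free query responses

# Passive learning: logistic regression on difference vectors via gradient descent
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probability that a beats b
    w -= 0.5 * X.T @ (p - y) / n      # gradient step on the logistic loss

# Objective 1: predict out-of-sample comparison responses
X_test = rng.normal(size=(n, d)) - rng.normal(size=(n, d))
acc = np.mean((X_test @ w > 0) == (X_test @ w_true > 0))

# Objective 2: recover the parameters (only the direction is identifiable
# from comparisons, so we compare unit vectors)
cos = w @ w_true / np.linalg.norm(w)
print(f"prediction accuracy: {acc:.3f}, cosine to w_true: {cos:.3f}")
```

Under this benign isotropic distribution both objectives succeed; the abstract's negative result concerns passive parameter recovery over a large class of less favorable distributions, where high comparison accuracy need not pin down the parameter direction.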