Learning Optimal Advantage from Preferences and Mistaking it for Reward

Published: 29 Jun 2023, Last Modified: 04 Oct 2023
Venue: MFPL (Oral)
Keywords: reinforcement learning, reward functions, preferences, regret, alignment
TL;DR: Reward learning from preferences assumes human preferences arise only from trajectory segments' sums of reward, but we consider what happens if preferences arise instead from the better-supported regret preference model.
Abstract:

We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments---as used in reinforcement learning from human feedback (RLHF)---including those used to fine-tune ChatGPT and other contemporary language models. Most recent work on such algorithms assumes that human preferences are generated based only upon the reward accrued within those segments, which we call their partial return. But if this assumption is false because people base their preferences on information other than partial return, then what type of function is such an algorithm learning from preferences? We argue that this function is better thought of as an approximation of the optimal advantage function, not as a partial return function as previously believed.
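To make the distinction the abstract draws concrete, the sketch below (ours, not from the paper) shows the two segment statistics that could drive a logistic preference model: the partial return versus the summed optimal advantage (the negated regret). Here `q_star`, `v_star`, the segment encoding, and `temperature` are hypothetical placeholders, and the paper's actual models may differ in form.

```python
import numpy as np

def partial_return(segment_rewards):
    """Partial return: the sum of reward accrued within a trajectory segment."""
    return float(np.sum(segment_rewards))

def summed_optimal_advantage(segment, q_star, v_star):
    """Sum of optimal advantages A*(s, a) = Q*(s, a) - V*(s) over a segment's
    state-action pairs. Under the regret preference model, this statistic,
    rather than partial return, drives preferences. `q_star` and `v_star`
    are assumed lookups (e.g., arrays or dicts) for the optimal value functions."""
    return sum(q_star[s, a] - v_star[s] for s, a in segment)

def preference_probability(stat_1, stat_2, temperature=1.0):
    """Logistic (Boltzmann) choice model: probability that segment 1 is
    preferred to segment 2, given each segment's scalar statistic."""
    return 1.0 / (1.0 + np.exp(-(stat_1 - stat_2) / temperature))
```

Under the partial-return assumption, `preference_probability` would be applied to the two segments' partial returns; under the regret preference model, it would be applied to their summed optimal advantages instead.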

Submission Number: 49

Paper Decision

Decision by Program Chairs (viktor.bengs@lmu.de, bkveton@amazon.com, aadirupa.saha@gmail.com, ghavamza@google.com, +1 more), 28 Jun 2023, 05:05
Decision: Accept (Oral)
Comment:

Dear Authors,

Thank you for submitting your paper to ICML 2023 Workshop “The Many Facets of Preference-Based Learning”. We are delighted to inform you that your submission has been accepted! Congratulations!

Your paper has been selected for a 15-minute oral presentation at the workshop. We will reach out to you soon to discuss further details. Note that all papers will be posted on our website, and we encourage you to make revisions before then. We will provide more details on the camera-ready version in the next few days. Of course, you can also present your paper at the poster session in addition to the oral presentation.

We are looking forward to seeing you at the workshop!

Sincerely,

Viktor Bengs (LMU, Germany)
Robert Busa-Fekete (Google Research)
Mohammad Ghavamzadeh (Google Research)
Branislav Kveton (AWS AI Labs)
Aadirupa Saha (Apple Research)