Estimating Effects of Tokens in Preference Learning

Published: 30 Oct 2024 · Last Modified: 13 Dec 2024 · LanGame Spotlight · CC BY 4.0
Keywords: causal effect, RLHF, preference learning
Abstract: It was recently shown that the advantage function in reinforcement learning (RL) can be interpreted as the causal effect of actions on the return. In the present work, we first cast the problem of RL from human feedback (RLHF) with pairwise preference data as a two-player game and generalize Direct Advantage Estimation, a method for estimating the advantage function, to this natural language setting. This enables us to quantify and estimate the causal effect of tokens on the preference. We apply our method to the Anthropic HH-RLHF dataset and demonstrate that it can estimate the effect of individual tokens on the overall preference.
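The abstract does not give the paper's exact objective; as an illustration only, the following is a minimal, hypothetical sketch (in PyTorch) of how summed per-token advantage estimates could be scored against a pairwise preference via a Bradley-Terry style logistic loss. All names here are assumptions, and the generalized Direct Advantage Estimation objective in the paper likely includes further constraints beyond this sketch.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(adv_chosen: torch.Tensor,
                             adv_rejected: torch.Tensor,
                             mask_chosen: torch.Tensor,
                             mask_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss over summed per-token advantage estimates.

    adv_*:  (batch, seq_len) per-token advantage (causal-effect) estimates
            produced by a token-level model.
    mask_*: (batch, seq_len) 1.0 for response tokens, 0.0 for prompt/padding.
    """
    # Aggregate token-level effects into a response-level score.
    score_chosen = (adv_chosen * mask_chosen).sum(dim=-1)
    score_rejected = (adv_rejected * mask_rejected).sum(dim=-1)
    # Logistic loss: the preferred response should receive the higher score.
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

Under a formulation of this kind, the learned per-token scores can be read as estimates of each token's contribution to the overall preference.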
Submission Number: 3