Keywords: Reinforcement Learning with Human Feedback, Reward Modeling, Deep Neural Networks, Learning Guarantee, Clear Human Beliefs
Abstract: In this work, we study the learning theory of reward modeling with pairwise comparison data and deep neural networks. We establish a novel non-asymptotic regret bound for deep reward estimators in a non-parametric setting, which depends explicitly on the network architecture. Furthermore, to underscore the critical importance of clear human beliefs, we introduce a margin-type condition requiring the conditional winning probability of the optimal action in pairwise comparisons to be bounded away from 1/2. This condition enables a sharper regret bound, which substantiates the empirical efficiency of Reinforcement Learning from Human Feedback (RLHF) and highlights the role of clear human beliefs in its success. Notably, this improvement stems from high-quality pairwise comparison data satisfying the margin-type condition and is independent of the specific estimators used, making it applicable to various learning algorithms and models.
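As a rough, hedged illustration of the margin-type condition described in the abstract (the notation $p^*$, $a^*$, and $\Delta$ is introduced here for exposition and need not match the paper's exact statement): writing $p^*(x) = \mathbb{P}\big(a^*(x) \succ a \mid x\big)$ for the conditional probability that the optimal action $a^*(x)$ wins a pairwise comparison against an alternative $a$ given context $x$, a condition of this kind would require, for some margin $\Delta > 0$,
\[
\left| p^*(x) - \tfrac{1}{2} \right| \ge \Delta \quad \text{for (almost) all } x,
\]
i.e., human preferences are clearly expressed rather than near-random, which is what permits the sharper regret bound.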
Supplementary Material: zip
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8930