Data Refinement: Mitigating Reward Over-Optimization in Reinforcement Learning with Human Feedback

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: reinforcement learning with human feedback, reward overoptimization
Abstract: Reinforcement Learning with Human Feedback (RLHF) is a pivotal technique for aligning language models with human-centric values. The initial phase of RLHF involves learning human values via a reward model trained on pairwise or $K$-wise comparisons. However, a study by Gao et al. (2022) showed that the performance of the reward model degrades after one training epoch, and that optimizing too much against such a reward model eventually hinders the true objective. This paper delves into these issues and uses the resulting theoretical insights to introduce an improved reward learning algorithm termed "data refinement". The core idea is that during each training epoch, we not only update the model with the data but also refine the data using the model, eliminating noisy entries. Our empirical findings highlight the superior performance of this approach over traditional methods.
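To make the core idea above concrete, here is a minimal sketch of one plausible data-refinement step, not the authors' implementation: after each training epoch, the current reward model scores both responses in every pairwise comparison, and pairs whose labels the model confidently contradicts are treated as noisy and dropped. The names `refine_pairwise_data`, `reward_fn`, `fit_one_epoch`, and `margin_threshold`, as well as the margin-based noise criterion itself, are illustrative assumptions rather than details from the paper.

```python
from typing import Callable, List, Tuple

Comparison = Tuple[str, str, str]  # (prompt, chosen_response, rejected_response)


def refine_pairwise_data(
    reward_fn: Callable[[str, str], float],
    dataset: List[Comparison],
    margin_threshold: float = 1.0,  # hypothetical noise threshold
) -> List[Comparison]:
    """Drop comparisons that the current reward model confidently contradicts.

    A pair is kept unless the model prefers the "rejected" response over the
    "chosen" one by more than `margin_threshold`, which we treat here as a
    sign of a noisy label.
    """
    refined = []
    for prompt, chosen, rejected in dataset:
        margin = reward_fn(prompt, chosen) - reward_fn(prompt, rejected)
        if margin > -margin_threshold:
            refined.append((prompt, chosen, rejected))
    return refined


def train_with_refinement(reward_model, dataset: List[Comparison], n_epochs: int = 3):
    """Alternate one epoch of reward learning with one pass of data refinement.

    `reward_model` is assumed to expose `fit_one_epoch(dataset)` and
    `score(prompt, response)`; both names are placeholders.
    """
    for _ in range(n_epochs):
        reward_model.fit_one_epoch(dataset)  # update the model with the data
        dataset = refine_pairwise_data(reward_model.score, dataset)  # refine the data with the model
    return reward_model, dataset
```

A softer variant could down-weight flagged pairs instead of removing them; the hard filter above is used only for brevity.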
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8806