Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint

ICLR 2024 Workshop ME-FoMo

Published: 04 Mar 2024, Last Modified: 20 Apr 2024. ME-FoMo 2024 Oral. License: CC BY 4.0
Keywords: RL theory, RLHF
TL;DR: A reverse-KL regularized contextual bandit formulation for RLHF.
Abstract: This paper studies the theoretical framework underlying the alignment of generative models with Reinforcement Learning from Human Feedback (RLHF). We consider a standard mathematical formulation, the reverse-KL regularized contextual bandit, for RLHF. Despite its widespread practical application, a rigorous theoretical analysis of this formulation remains open. We investigate its behavior in three distinct settings---offline, online, and hybrid---and propose efficient algorithms with finite-sample theoretical guarantees. Moving toward practical applications, our framework, with a robust approximation of the information-theoretical policy improvement oracle, naturally gives rise to several novel RLHF algorithms. These include an iterative version of the Direct Preference Optimization (DPO) algorithm for online settings, and a multi-step rejection sampling strategy for offline scenarios. Our empirical evaluations on real-world alignment experiments with large language models demonstrate that these proposed methods significantly surpass existing strong baselines, such as DPO and Rejection Sampling Optimization (RSO), showcasing the connection between solid theoretical foundations and their potent practical implementations.
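To make the formulation concrete: in the reverse-KL regularized bandit, the optimal policy is a Gibbs distribution over the reference policy, and DPO exploits this closed form to train directly on preference pairs. Below is a minimal, dependency-free sketch of the standard per-pair DPO loss, -log sigma(beta * (policy log-ratio margin - reference log-ratio margin)); the function name and scalar inputs are illustrative, not from the paper.

```python
import math

def dpo_loss(beta: float,
             logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float) -> float:
    """Standard per-pair DPO loss (scalar sketch).

    beta: KL-regularization strength (the inverse of the bandit's eta).
    logp_*: policy log-probabilities of the chosen/rejected responses.
    ref_logp_*: reference-policy log-probabilities of the same responses.
    """
    # Margin between the policy's and the reference's log-ratios.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the scaled margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

In an iterative/online variant, one would repeatedly sample new responses from the current policy, collect preference labels, and re-minimize this loss, rather than training once on a fixed offline dataset.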
Submission Number: 58