Abstract: Direct alignment from preferences (DAP) has emerged as a promising paradigm for aligning large language models (LLMs) to human desiderata from pre-collected, offline preference datasets. While recent studies indicate that existing offline DAP methods can directly benefit from online training samples, we highlight the need to develop specific online DAP algorithms to fully harness the power of online training. Specifically, we identify that the learned LLM should remain in the proximity of the *behavior LLM*, which collects the training samples. To this end, we propose online **P**reference **O**ptimization in proximity to the **B**ehavior LLM ($\mathcal{B}$PO),
emphasizing the importance of constructing a proper trust region for LLM alignment.
We conduct extensive experiments to validate the effectiveness and applicability of our approach by integrating it with various DAP methods, yielding significant performance improvements across a wide range of tasks when training with the same amount of preference data. Even when introducing only *one* additional data collection phase, our online $\mathcal{B}$PO improves its offline DAP baseline from *72.0%* to *80.2%* on TL;DR and from *82.2%* to *89.1%* on Anthropic Helpfulness, measured by win rate against human reference text.
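To make the contrast concrete, the following is a minimal, hypothetical sketch of the idea the abstract describes, assuming a DPO-style pairwise objective (DPO is listed in the keywords). The exact $\mathcal{B}$PO objective is defined in the paper; all names, signatures, and the `dap_pairwise_loss` helper below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: a DPO-style pairwise loss whose trust region is centered on a
# reference model. Offline DAP keeps this reference fixed at the initial
# (e.g., SFT) model for all batches; the online scheme described above
# instead re-centers the trust region on the *behavior LLM*, i.e., the
# current policy snapshot that collected the latest preference batch.
import torch
import torch.nn.functional as F


def dap_pairwise_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style loss over summed response log-probabilities.

    The ref_* terms come from whichever model defines the trust region:
    a fixed initial model (offline DAP) or the behavior LLM that collected
    the current batch (the online variant sketched here).
    """
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Toy usage with random log-probabilities for a batch of 4 preference pairs.
    n = 4
    loss = dap_pairwise_loss(torch.randn(n), torch.randn(n),
                             torch.randn(n), torch.randn(n))
    print(loss.item())
```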
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: DPO, alignment, reinforcement learning from human feedback
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1938