Alignment from Ranking and Rating Information

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: direct preference optimization, alignment, sample complexity guarantees
Abstract: The class of direct preference optimization (DPO) algorithms has emerged as a promising approach to the alignment problem in foundation models. These algorithms work with very limited feedback in the form of pairwise preferences and fine-tune models to align with these preferences without explicitly learning a reward model. While this form of feedback makes data collection easy, its ambiguity about the quality of responses has significant negative implications, including incentivizing policies that favor out-of-distribution responses, a phenomenon referred to as likelihood displacement. In this paper, we study how DPO-style algorithms can leverage additional information in the form of a rating gap, which informs the learner how much better the preferred response is than the rejected one. We present new algorithms that achieve faster statistical rates than DPO in the presence of accurate rating gap information. Moreover, we theoretically prove and empirically show that the performance of our algorithms is robust to inaccuracy in the rating gaps. Finally, we demonstrate the strong performance of our algorithms relative to a number of DPO-style algorithms across a wide range of LLMs and evaluation benchmarks.
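As a rough illustration of how rating gap information could enter a DPO-style objective, the sketch below treats the gap as an additive margin inside the standard DPO logistic loss, so that a larger rated gap demands a larger implicit-reward separation between the chosen and rejected responses. This is a hypothetical formulation for intuition only; the function name, the `gamma` scaling parameter, and the choice of an additive margin are assumptions and are not taken from the paper's actual algorithms.

```python
import torch
import torch.nn.functional as F

def dpo_loss_with_rating_gap(policy_chosen_logps, policy_rejected_logps,
                             ref_chosen_logps, ref_rejected_logps,
                             rating_gap, beta=0.1, gamma=1.0):
    """Illustrative margin-style DPO loss where the rating gap acts as an offset.

    NOTE: a hypothetical sketch, not the paper's exact objective.
    All tensors have shape (batch,); `rating_gap` holds how much higher the
    chosen response is rated relative to the rejected one.
    """
    # Implicit rewards: scaled log-probability ratios against the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Standard DPO uses logits = chosen_rewards - rejected_rewards.
    # Here the rating gap shifts the decision boundary: larger gaps require
    # a larger implicit-reward margin before the loss is driven down.
    logits = chosen_rewards - rejected_rewards - gamma * rating_gap

    return -F.logsigmoid(logits).mean()
```

Setting `rating_gap` to zero recovers the usual DPO loss under this sketch, which is one way such a variant could remain robust when rating information is uninformative.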
Primary Area: reinforcement learning
Submission Number: 11484