Keywords: Direct Preference Optimization, Bradley-Terry Model, Large Language Models
Abstract: Direct preference optimization (DPO) has emerged as a promising approach for aligning large language models (LLMs) with human preferences. However, its widespread reliance on the response-level Bradley-Terry (BT) model may keep it from reaching its full potential, since the reference and learnable models are assumed to be autoregressive only after the objective function has been derived. Motivated by this limitation, we revisit the theoretical foundations of DPO and propose a novel formulation that explicitly introduces the autoregressive assumption before applying the BT model. Specifically, we first recast the derivation of DPO in terms of two Boltzmann distributions with reward-based energies defined over the output (response) space $\mathcal{Y}$. We then extend the energy domain from $\mathcal{Y}$ to its prefix closure $\mathcal{Y}^{*}$. Interestingly, this simple extension naturally leads to energy definitions built on an autoregressive reference model, to a prefix-wise BT model, and ultimately to a novel DPO variant, Autoregressive DPO (ADPO), with its corresponding loss function. Without departing from these theoretical foundations, the derived loss takes an elegant form: it shifts the summation in the DPO objective outside the log-sigmoid function. Furthermore, through a theoretical analysis of ADPO, we show that two length measures must be considered when designing DPO-based algorithms: the token length $\mu$ and the feedback length $\mu'$. To the best of our knowledge, we are the first to explicitly distinguish these two measures and analyze their implications for preference optimization in LLMs.
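As a minimal sketch of the contrast described above (assuming standard DPO notation not spelled out in the abstract: $\pi_\theta$ for the learnable policy, $\pi_{\mathrm{ref}}$ for the reference policy, $\beta$ for the temperature, and $(x, y_w, y_l)$ for a prompt with preferred and dispreferred responses), the DPO loss keeps the token-level summation inside the log-sigmoid,
$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log\sigma\!\Big(\sum_{t=1}^{|y_w|}\beta\log\frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\mathrm{ref}}(y_{w,t}\mid x, y_{w,<t})}-\sum_{t=1}^{|y_l|}\beta\log\frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\mathrm{ref}}(y_{l,t}\mid x, y_{l,<t})}\Big)\right],
$$
whereas an ADPO-style loss of the shape sketched here places the summation outside,
$$
\mathcal{L}_{\mathrm{ADPO}} \;\propto\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\sum_{t}\log\sigma\!\Big(\beta\log\frac{\pi_\theta(y_{w,t}\mid x, y_{w,<t})}{\pi_{\mathrm{ref}}(y_{w,t}\mid x, y_{w,<t})}-\beta\log\frac{\pi_\theta(y_{l,t}\mid x, y_{l,<t})}{\pi_{\mathrm{ref}}(y_{l,t}\mid x, y_{l,<t})}\Big)\right],
$$
where the exact per-step terms, the range of $t$, and the handling of unequal response lengths follow the paper's prefix-wise BT derivation and may differ from this simplified form.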
Primary Area: reinforcement learning
Submission Number: 15529