The Crucial Role of Samplers in Online Direct Preference Optimization

Published: 10 Oct 2024, Last Modified: 07 Dec 2024
Venue: NeurIPS 2024 Workshop
License: CC BY 4.0
Keywords: direct preference optimization, online DPO, multi-armed bandit
TL;DR: We study the convergence rates of (online) DPO from an optimization perspective and show the impact of samplers through a theoretical separation and empirical experiments.
Abstract: In this paper, we provide a rigorous analysis of the convergence rates of DPO under different sampling strategies in the exact-gradient setting, revealing a separation: uniform sampling achieves linear convergence, while our proposed online sampler achieves quadratic convergence. We further adapt the sampler to practical settings by incorporating posterior distributions and logit mixing, demonstrating significant improvements over previous approaches. Our results not only offer insights into the theoretical standing of DPO but also pave the way for future algorithm designs.
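For context on the objective being analyzed, the standard per-pair DPO loss (Rafailov et al., 2023) can be sketched as below; this is a minimal illustration of the loss whose convergence the abstract discusses, not the paper's sampler itself, and the variable names are our own:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_w / logp_l: policy log-probs of the preferred (w) and
    rejected (l) responses; ref_logp_*: the same quantities under
    the frozen reference policy. beta scales the implicit reward.
    """
    # Implicit reward margin between preferred and rejected responses
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): shrinks as the policy prefers y_w over y_l
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

For example, when the policy matches the reference the margin is zero and the loss equals log 2; which preference pairs are fed into this loss is exactly the sampler choice the paper studies.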
Submission Number: 31