Clear Preferences Leave Traces: Reference Model-Guided Sampling for Preference Learning

Published: 13 Dec 2024, Last Modified: 19 Feb 2025
Venue: Good-Data
License: CC BY 4.0
Student Lead Author Indication: Yes
Keywords: Preference Learning, Reference model, Sampling strategy, Alignment, Direct Preference Optimization.
TL;DR: Reference models can naturally detect preference clarity, letting us train better with less data and achieving higher performance.
Abstract: Direct Preference Optimization (DPO) has emerged as a de facto approach for aligning language models with human preferences. Recent work has shown that DPO's effectiveness relies on training data quality. In particular, clear quality differences between preferred and rejected responses enhance learning performance. Current methods for identifying and obtaining such high-quality samples demand additional resources or external models. We discover that the reference model's probability space naturally detects high-quality training samples. Using this insight, we present a sampling strategy that achieves consistent improvements (+0.1 to +0.4) on MT-Bench while using less than half (30-50%) of the training data. We observe substantial improvements (+0.4 to +0.98) for technical tasks (coding, math, and reasoning) across multiple models and hyperparameter settings.
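The abstract does not spell out the selection rule, but the idea of reading preference clarity off the reference model's probability space can be illustrated with a minimal sketch. The snippet below assumes the clarity signal is simply the reference model's log-probability margin between the chosen and rejected responses, and keeps only the clearest fraction of pairs before DPO training. The function names (`sequence_logprob`, `select_clear_pairs`), the `keep_ratio` parameter, the simplified prompt/response boundary handling, and the use of `gpt2` as a stand-in reference model are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: score each preference pair under a frozen reference model
# by the margin log p_ref(chosen | prompt) - log p_ref(rejected | prompt),
# then keep only the clearest pairs for DPO training.
# The margin criterion and keep_ratio are assumptions; the paper's exact rule may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sequence_logprob(model, tokenizer, prompt, response, device="cpu"):
    """Sum of token log-probabilities of `response` given `prompt` under `model`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        logits = model(full_ids).logits  # shape: (1, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Only count tokens that belong to the response (boundary handling is simplified).
    response_start = prompt_ids.shape[1] - 1
    return token_lp[:, response_start:].sum().item()


def select_clear_pairs(pairs, model, tokenizer, keep_ratio=0.5, device="cpu"):
    """Rank pairs by the reference-model margin and keep the top `keep_ratio` fraction."""
    scored = []
    for p in pairs:
        margin = (
            sequence_logprob(model, tokenizer, p["prompt"], p["chosen"], device)
            - sequence_logprob(model, tokenizer, p["prompt"], p["rejected"], device)
        )
        scored.append((margin, p))
    scored.sort(key=lambda x: x[0], reverse=True)
    n_keep = max(1, int(len(scored) * keep_ratio))
    return [p for _, p in scored[:n_keep]]


if __name__ == "__main__":
    name = "gpt2"  # stand-in for the actual reference model
    tok = AutoTokenizer.from_pretrained(name)
    ref = AutoModelForCausalLM.from_pretrained(name).eval()
    pairs = [
        {"prompt": "Q: 2 + 2 = ?\nA:", "chosen": " 4", "rejected": " 5"},
        {"prompt": "Name a prime number.\n", "chosen": " 7", "rejected": " 9"},
    ]
    kept = select_clear_pairs(pairs, ref, tok, keep_ratio=0.5)
    print(f"Kept {len(kept)} of {len(pairs)} pairs for DPO training.")
```

Under this reading, the filtered subset (here 30-50% of the original pairs, matching the data budget reported in the abstract) would then be passed to a standard DPO trainer in place of the full dataset.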
Submission Number: 21