Abstract: Aligning small language models (SLMs) with human values typically involves distilling preference knowledge from large language models (LLMs).
However, existing distillation methods model preference knowledge in teacher LLMs by comparing responses pairwise, overlooking the extent to which one response is preferred over another.
This limitation prevents student SLMs from capturing nuanced preferences across multiple responses. In this paper, we propose a Preference-Aligned Distillation (PAD) framework, which models the teacher's preference knowledge as a probability distribution over all possible preferences, thereby providing more nuanced supervisory signals.
Our insight in developing PAD is rooted in the demonstration that a language model can serve as a reward function, reflecting its intrinsic preference distribution.
Based on this insight, PAD comprises three key steps:
(1) generating diverse responses via high-temperature sampling; (2) computing rewards under both the teacher and the student to construct their intrinsic preference distributions;
and (3) training the student's preference distribution to align with the teacher's.
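The snippet below is a minimal sketch of these three steps, under assumptions the abstract does not spell out: the reward a model assigns to a response is taken as its length-normalized log-likelihood, the preference distribution is a softmax over those rewards, and the student is trained by minimizing the KL divergence from the teacher's distribution. The specific \textsc{Gemma} checkpoints and hyperparameters are hypothetical placeholders, not the paper's actual setup.

```python
# Sketch of the three PAD steps (assumed formulation, see lead-in above).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical teacher/student pair from the same family (shared tokenizer assumed).
teacher_name, student_name = "google/gemma-7b-it", "google/gemma-2b-it"
tok = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
student = AutoModelForCausalLM.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-6)

def sample_responses(prompt: str, n: int = 4, temperature: float = 1.2) -> list[str]:
    """Step (1): draw diverse responses with high-temperature sampling."""
    inputs = tok(prompt, return_tensors="pt")
    outputs = student.generate(
        **inputs, do_sample=True, temperature=temperature,
        num_return_sequences=n, max_new_tokens=128,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tok.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]

def reward(model, prompt: str, response: str) -> torch.Tensor:
    """Assumed reward: the model's average log-probability of the response tokens."""
    full = tok(prompt + response, return_tensors="pt")
    prompt_len = tok(prompt, return_tensors="pt")["input_ids"].shape[1]
    logits = model(**full).logits[:, :-1]          # predictions for tokens 1..T
    targets = full["input_ids"][:, 1:]
    logp = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return logp[:, prompt_len - 1:].mean()         # length-normalized over response tokens

def distill_step(prompt: str) -> float:
    responses = sample_responses(prompt)                          # step (1)
    with torch.no_grad():                                         # step (2): teacher rewards
        r_teacher = torch.stack([reward(teacher, prompt, r) for r in responses])
    r_student = torch.stack([reward(student, prompt, r) for r in responses])
    p_teacher = F.softmax(r_teacher, dim=0)                       # intrinsic preference distributions
    log_p_student = F.log_softmax(r_student, dim=0)
    loss = F.kl_div(log_p_student, p_teacher, reduction="sum")    # step (3): align student to teacher
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

The KL objective here is one plausible way to match the two distributions; the softmax-over-rewards construction is the assumed link between reward values and the "probability distribution over all possible preferences" described above.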
Experiments on four mainstream alignment benchmarks demonstrate that PAD consistently and significantly outperforms existing approaches,
achieving improvements of over 20\% on AlpacaEval 2 and Arena-Hard, indicating superior alignment with human preferences.
Notably, on MT-Bench, the student trained with PAD using the \textsc{Gemma} model family surpasses its teacher, further validating the effectiveness of PAD.
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: distillation, NLP in resource-constrained settings
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Theory
Languages Studied: English
Submission Number: 1654