Keywords: Large language model, Preference alignment
TL;DR: We propose Confidence-Weighted Preference Optimization (CW-PO) for preference alignment, which effectively leverages weak LLMs as annotators.
Abstract: Preference alignment is an essential step in adapting large language models (LLMs) to human values, but existing approaches typically depend on costly human annotations or large-scale API-based models. We explore whether a weak LLM can instead act as an effective annotator. Surprisingly, we find that selecting only a subset of a weak LLM's highly confident samples leads to substantially better performance than using full human annotations. Building on this insight, we propose **C**onfidence-**W**eighted **P**reference **O**ptimization (CW-PO), a general framework that re-weights training samples by a weak LLM's confidence and can be applied across different preference optimization objectives. Notably, the model aligned by CW-PO with just 20% of human annotations outperforms the model trained with 100% of annotations under standard DPO. These results suggest that weak LLMs, when paired with confidence weighting, can dramatically reduce the cost of preference alignment while even outperforming methods trained on fully human-labeled data.
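To make the abstract's core idea concrete, below is a minimal illustrative sketch (not the authors' reference implementation) of how a confidence-weighted DPO-style objective could re-weight per-sample losses. The function name `cw_dpo_loss`, the `confidence` tensor, and the specific weighting scheme are assumptions for illustration, since the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def cw_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps,
                confidence, beta=0.1):
    """Confidence-weighted DPO-style loss (illustrative sketch only).

    All *_logps are per-example log-probabilities of the chosen / rejected
    responses under the policy and reference models; `confidence` holds the
    weak annotator's confidence in each preference label (in [0, 1]).
    """
    # Standard DPO logits: difference of policy and reference log-ratios.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_ratio - rejected_ratio)

    # Per-sample DPO loss, re-weighted by the annotator's confidence so that
    # low-confidence (likely noisy) preference pairs contribute less.
    per_sample = -F.logsigmoid(logits)
    return (confidence * per_sample).mean()

# Example usage with dummy per-example log-probabilities:
loss = cw_dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                   torch.tensor([-11.0]), torch.tensor([-11.5]),
                   confidence=torch.tensor([0.9]))
```

Setting `confidence` to all ones recovers standard DPO, which is one way such a framework can wrap existing preference optimization objectives.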
Primary Area: foundation or frontier models, including LLMs
Submission Number: 20342