Keywords: self-play preference optimization, DPO, sample selection
TL;DR: We study the role of prompts in the self-play preference optimization pipeline.
Abstract: Self-play preference optimization has emerged as a prominent paradigm for aligning large language models (LLMs).
It typically involves a language model that generates on-policy responses to prompts and a reward model (RM) that guides the selection of chosen and rejected responses, which are then used to further train the model with direct preference optimization (DPO).
However, the role of prompts remains underexplored, even though they are a core component of this pipeline.
In this work, we investigate how prompts of varying difficulty influence self-play preference optimization.
We first use the mean reward of $N$ sampled responses to a prompt as a proxy for its difficulty.
We find that language models achieve substantially worse self-play optimization performance on difficult prompts than on easy ones.
Moreover, incorporating difficult prompts into training fails to enhance overall performance and, in fact, leads to slight degradation compared to training on easy prompts alone.
We also observe that the performance gap between difficult and easy prompts narrows as model capacity increases, suggesting that prompt difficulty interacts with model capacity.
Building on these findings, we explore strategies to mitigate the negative effect of difficult prompts on final performance.
We demonstrate that selectively removing an appropriate portion of challenging prompts improves overall self-play performance, and we also report failed attempts and lessons learned.
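Below is a minimal sketch of the difficulty proxy and the prompt-filtering step described in the abstract, assuming hypothetical `generate` and `reward` callables standing in for the policy and the reward model. The pair-construction rule (best vs. worst sampled response) and the default drop fraction are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List, Tuple

# Hypothetical interfaces: `generate` samples one on-policy response for a prompt,
# and `reward` scores a (prompt, response) pair with the reward model (RM).
GenerateFn = Callable[[str], str]
RewardFn = Callable[[str, str], float]


@dataclass
class PromptStats:
    prompt: str
    responses: List[str]
    rewards: List[float]

    @property
    def difficulty_proxy(self) -> float:
        # Mean reward of the N sampled responses; a lower mean marks a harder prompt.
        return mean(self.rewards)

    def preference_pair(self) -> Tuple[str, str]:
        # Chosen = highest-reward response, rejected = lowest-reward response:
        # the (chosen, rejected) pair that would feed a DPO update.
        ranked = sorted(zip(self.rewards, self.responses), key=lambda x: x[0])
        return ranked[-1][1], ranked[0][1]


def score_prompts(prompts: List[str], generate: GenerateFn, reward: RewardFn,
                  n_samples: int = 8) -> List[PromptStats]:
    """Sample N on-policy responses per prompt and score them with the RM."""
    stats = []
    for prompt in prompts:
        responses = [generate(prompt) for _ in range(n_samples)]
        rewards = [reward(prompt, r) for r in responses]
        stats.append(PromptStats(prompt, responses, rewards))
    return stats


def drop_hardest(stats: List[PromptStats], drop_fraction: float = 0.2) -> List[PromptStats]:
    """Remove the `drop_fraction` of prompts with the lowest mean reward
    (i.e., the most difficult ones) before building DPO training pairs."""
    kept = sorted(stats, key=lambda s: s.difficulty_proxy, reverse=True)
    n_keep = max(1, int(len(kept) * (1.0 - drop_fraction)))
    return kept[:n_keep]
```

With this interface, the preference pairs from the retained prompts would then be passed to a standard DPO training loop; the appropriate drop fraction would need to be tuned per model and dataset.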
Supplementary Material: zip
Primary Area: generative models
Submission Number: 19460