Automatically Finding Reward Model Biases
Track: Main Papers Track (6 to 9 pages)
Keywords: Reward models, reward model biases, black-box interpretability, automated red-teaming
TL;DR: We propose a simple automated LLM pipeline that finds novel biases in a SoTA open reward model.
Abstract: Large language model (LLM) post-training typically relies on a training signal from a reward model (RM), for example in reinforcement learning from human feedback. Previous work shows that this signal can be biased along attributes such as length, format, and sycophancy. In this work, we introduce and study the research problem of automatically finding reward model biases in natural language. We offer a simple approach that uses an LLM to iteratively propose and refine candidate biases. Our method can recover known biases and surface novel ones: for example, we found that Skywork-V2-8B, a leading open-weight reward model, often mistakenly favors responses with redundant spacing and responses with hallucinated content. In addition, we show evidence that iteration provides benefits over a flat best-of-N search. We hope our work contributes to further research on improving RMs through automated interpretability methods.
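The abstract describes the pipeline only at a high level. Below is a minimal Python sketch of one plausible reading: a proposer LLM generates candidate bias hypotheses, each hypothesis is scored by how often the reward model prefers a response exhibiting the bias over a matched control, and the scored results are fed back to the proposer for refinement. All function names, interfaces, and the win-rate scoring scheme are assumptions made for illustration, not the authors' implementation.

```python
import random

# --- Hypothetical stubs: the paper does not specify these interfaces. ---

def llm_propose(history, k=8):
    """Ask a proposer LLM for k candidate bias hypotheses in natural
    language, conditioned on previously scored hypotheses (stubbed)."""
    return [f"hypothesis-{random.random():.4f}" for _ in range(k)]

def make_pair(prompt, bias):
    """Produce a matched response pair: one rewritten to exhibit the
    candidate bias, one control that answers normally (stubbed)."""
    return f"{prompt} [+{bias}]", f"{prompt} [control]"

def rm_score(response):
    """Score a response with the reward model under test (stubbed)."""
    return random.random()

def bias_win_rate(bias, prompts):
    """Fraction of prompts on which the RM prefers the biased response
    over its matched control; a rate well above 0.5 suggests a bias."""
    wins = sum(
        rm_score(biased) > rm_score(control)
        for biased, control in (make_pair(p, bias) for p in prompts)
    )
    return wins / len(prompts)

def search_biases(prompts, rounds=5, k=8, keep=3):
    """Iterative propose-and-refine loop: each round's candidates are
    scored and appended to the history the proposer conditions on.
    Setting rounds=1 with a large k recovers the flat best-of-N
    baseline that the abstract compares against."""
    history, best = [], []
    for _ in range(rounds):
        for bias in llm_propose(history, k):
            history.append((bias, bias_win_rate(bias, prompts)))
        history.sort(key=lambda pair: pair[1], reverse=True)
        best = history[:keep]
    return best

if __name__ == "__main__":
    print(search_biases([f"prompt-{i}" for i in range(20)]))
```

Under this reading, the benefit of iteration over flat best-of-N would come from the history argument: later proposals can exploit which earlier hypotheses scored well rather than sampling blind.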
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 38