When Can Proxies Improve the Sample Complexity of Preference Learning?

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We clarify the conditions under which proxy data can aid learning of a target function, propose a model parameterisation that exploits them, and analyse the resulting sample complexity.
Abstract: We address the problem of reward hacking, where maximising a proxy reward does not necessarily increase the true reward. This is a key concern for Large Language Models (LLMs), as they are often fine-tuned on human preferences that may not accurately reflect a true objective. Existing work uses various tricks, such as regularisation, tweaks to the reward model, and reward hacking detectors, to limit the influence that such proxy preferences have on a model. Fortunately, in many contexts such as medicine, education, and law, a small amount of expert data is often available. In these cases, it is often unclear whether the addition of proxy data can improve policy learning. We outline a set of sufficient conditions on proxy feedback that, if satisfied, indicate that proxy data can provably improve the sample complexity of learning the ground truth policy. These conditions can inform the data collection process for specific tasks. The result implies a parameterisation for LLMs that achieves this improved sample complexity, and we detail how existing architectures can be adapted accordingly.
Lay Summary: We address the challenge of *reward hacking*, where AI models, such as large language models (LLMs), optimize for proxy rewards—like human preferences—that don’t always align with the true objective. This is especially relevant when LLMs are fine-tuned using human feedback, which may be biased or incomplete. While existing methods try to reduce this issue using techniques like regularization or reward model adjustments, we focus on a different angle. In fields like medicine, education, or law, small amounts of expert data are often available alongside less reliable proxy data. It’s not always clear whether using this additional proxy feedback helps or hurts learning. We identify a set of conditions under which proxy data can *reliably* improve learning efficiency—reducing the amount of expert data needed. These findings can guide how feedback is collected and used. We also describe how to adapt current LLM architectures to benefit from these insights and achieve better learning outcomes.
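The abstract states that a suitable model parameterisation lets abundant proxy preferences reduce the amount of expert data needed, but it does not specify that parameterisation. The sketch below is only an illustrative assumption of one common way to share information between proxy and expert feedback: a reward model with a shared backbone (trained mostly on proxy comparisons) and a separate expert head (trained on the scarce expert comparisons), fit with the standard Bradley-Terry preference loss. All class and function names here (`SharedRewardModel`, `bradley_terry_loss`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper's actual parameterisation is not described
# in this abstract, so the shared-backbone / two-head design is an assumption.
import torch
import torch.nn as nn


class SharedRewardModel(nn.Module):
    """Hypothetical reward model: a shared representation plus two scalar heads,
    one fit on abundant proxy preferences and one on scarce expert preferences."""

    def __init__(self, input_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.proxy_head = nn.Linear(hidden_dim, 1)   # trained on proxy labels
        self.expert_head = nn.Linear(hidden_dim, 1)  # trained on expert labels

    def forward(self, x: torch.Tensor, use_expert: bool = True) -> torch.Tensor:
        h = self.backbone(x)
        head = self.expert_head if use_expert else self.proxy_head
        return head(h).squeeze(-1)


def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Standard Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)."""
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
```

Under this assumed setup, the proxy data would primarily shape the shared backbone, so the expert head only needs enough expert comparisons to fit a low-dimensional correction; whether this matches the paper's parameterisation and sample-complexity guarantee can only be confirmed from the full text.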
Primary Area: Theory->Domain Adaptation and Transfer Learning
Keywords: AI safety; preference learning; RLHF; reward hacking; learning theory; proxy feedback
Submission Number: 10939