Mitigating Reward Hacking with RL Training Interventions

Published: 02 Mar 2026, Last Modified: 07 Mar 2026ICLR 2026 Trustworthy AIEveryoneRevisionsBibTeXCC BY 4.0
Keywords: Reinforcement Learning, Reward Hacking, AI Safety, Steering, Mechanistic Interpretability
TL;DR: We present two naturalistic reward hacking environments and compare RL training interventions to mitigate reward hacking with minimal performance impact
Abstract: Reinforcement learning (RL) is central to LLM post-training, but reward functions are imperfect incentives for desired behavior and models often reward hack by exploiting loopholes in reward design. Reward hacking undermines the trustworthiness of the training process and has even been shown to generalize to broader misalignment. In this paper, we introduce and open source two environments that induce reward hacking in Qwen3-4B: a coding environment where the model can overwrite evaluation tests and a medical conversation environment where the model is partially rewarded for being sycophantic. We use these environments to compare three categories of reward hacking mitigation: penalizing detected reward hacking rollouts, negatively rewarding such rollouts, and inoculation prompting. Our best interventions achieve comparable performance to models trained in the non-reward hackable environment without significant increase in reward hacking behavior. Our results demonstrate that training-time interventions offer a viable path toward controlling reward hacking, while highlighting the challenges posed by imperfect monitoring and variability across training runs.
Submission Number: 102
Loading