Recontextualization Mitigates Specification Gaming without Modifying the Specification

Published: 03 Mar 2026, Last Modified: 31 Mar 2026SPOTEveryoneRevisionsBibTeXCC BY 4.0
Keywords: reinforcement learning, post-training, reward hacking, scalable oversight, alignment, safety
Abstract: Developers often struggle to specify correct training labels and rewards. Perhaps they don't need to. We propose recontextualization, which reduces how often language models "game" training signals, performing misbehaviors those signals mistakenly reinforce. We show recontextualization prevents models from learning to 1) prioritize evaluation metrics over chat response quality; 2) special-case code to pass incorrect tests; and 3) overwrite evaluation functions rather than write correct code. Our method works by generating completions from prompts discouraging misbehavior and then recontextualizing them as though they were in response to prompts permitting misbehavior. Recontextualization trains language models to resist misbehavior even when instructions permit it. This mitigates the reinforcement of misbehavior from misspecified training signals, reducing specification gaming without improving the supervision signal.
Submission Number: 58
Loading