Track: Technical
Keywords: Machine learning, Large language model, LLM, GPT-4o, GPT-4o-mini, Deception, Specification gaming, Reward hacking, In-context reinforcement learning, In-context learning, Iterative refinement
TL;DR: We show that even without fine-tuning, in-context iterative reflection enables helpful, harmless, and honest ("HHH") LLMs such as GPT-4o and GPT-4o-mini to discover specification-gaming policies, and that fine-tuning on a curriculum of tasks further increases this propensity.
Abstract: Previous work has shown that training research-purpose "helpful-only" LLMs with reinforcement learning on a curriculum of gameable environments can lead models to generalize to egregious specification gaming, such as editing their own reward function or modifying task checklists to appear more successful. We show that GPT-4o and GPT-4o-mini, frontier models trained to be helpful, harmless, and honest, can engage in specification gaming without training on a curriculum of tasks, purely from in-context iterative reflection (which we call in-context reinforcement learning, "ICRL"). We also show that incorporating ICRL into expert iteration, compared to the naive version of that reinforcement learning algorithm, increases GPT-4o-mini's propensity to learn specification-gaming policies, generalizing to the most egregious strategy, in which GPT-4o-mini edits its own reward function. Our results demonstrate the strong ability of in-context reflection to discover rare specification-gaming strategies that models might not exhibit zero-shot or under standard training, underscoring the need for caution when relying on the alignment of LLMs in zero-shot settings.
Submission Number: 113