Action-Dependent Optimality-Preserving Reward Shaping

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Previous methods for adapting shaping rewards/intrinsic motivation to prevent reward hacking don't scale well to complex environments; we invented one that does.
Abstract: Recent RL research has utilized reward shaping, particularly complex shaping rewards such as intrinsic motivation (IM), to encourage agent exploration in sparse-reward environments. While often effective, such shaping rewards are vulnerable to "reward hacking," in which the shaping reward is optimized at the expense of the extrinsic reward, resulting in a suboptimal policy. Potential-Based Reward Shaping (PBRS) techniques such as Generalized Reward Matching (GRM) and Policy-Invariant Explicit Shaping (PIES) have mitigated this: they allow IM to be implemented without altering the optimal policy set. In this work, we show that these methods are effectively unsuitable for complex, exploration-heavy environments with long-duration episodes. To remedy this, we introduce Action-Dependent Optimality-Preserving Shaping (ADOPS), a method for converting intrinsic rewards to an optimality-preserving form that allows agents to utilize IM more effectively in the extremely sparse environment of Montezuma's Revenge. We also prove that ADOPS accommodates reward-shaping functions that cannot be written in a potential-based form: while PBRS-based methods require the cumulative discounted intrinsic return to be independent of the agent's actions, ADOPS allows the intrinsic cumulative return to depend on the agent's actions while still preserving the optimal policy set. We show how this action-dependence enables ADOPS to preserve optimality while learning in complex, sparse-reward environments where other methods struggle.
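For readers unfamiliar with the potential-based constraint referenced above, a minimal sketch using the standard notation of Ng, Harada, and Russell (1999), not necessarily this paper's: a potential-based shaping term $F$ telescopes along any trajectory, so its discounted return depends only on the start state and never on the agent's actions, which is exactly the restriction ADOPS is said to relax.

$$F(s_t, a_t, s_{t+1}) = \gamma\,\Phi(s_{t+1}) - \Phi(s_t), \qquad \sum_{t=0}^{\infty} \gamma^{t} F(s_t, a_t, s_{t+1}) = -\Phi(s_0),$$

since consecutive $\gamma^{t+1}\Phi(s_{t+1})$ terms cancel (assuming $\gamma < 1$ and bounded $\Phi$). Because $-\Phi(s_0)$ does not depend on the actions taken, adding $F$ to the extrinsic reward cannot change which policies are optimal.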
Lay Summary: Many reinforcement learning problems are very difficult or impossible to learn without giving the agent "hints" of some kind to help it know what sorts of actions to try. Often, though, the agent learns to "hack" these hints and optimize for them at the expense of whatever task we actually want it to accomplish. There are prior mathematical tricks for taking a set of hints that might be "hackable" and converting them to a form that we can guarantee isn't hackable. However, these previous methods haven't yet been tested in really hard-to-learn environments. We test them in one such environment and find that, even though they technically keep the hints from being hackable, they also make the hints worse at guiding the agent, to the extent that it fails to learn well at all. To remedy this, we develop a provably more general method of converting hints to a form that we can show mathematically can't be hacked. We also demonstrate empirically that our method helps the agent learn a better policy faster in this same difficult environment where prior methods fail.
Primary Area: Reinforcement Learning
Keywords: Reinforcement learning, reward shaping, intrinsic motivation, potential-based reward shaping, exploration
Submission Number: 7331