Keywords: Deep Reinforcement Learning, Inverse Reinforcement Learning, Reward Learning
TL;DR: An investigation of what causes learned reward functions to fail to train new reinforcement learning agents from scratch.
Abstract: Reward functions are notoriously difficult to specify, especially for tasks with
complex goals. Reward learning approaches attempt to infer reward functions from
human feedback and preferences. Prior works on reward learning have mainly
focused on the performance of policies trained alongside the reward function.
This practice, however, may fail to detect learned rewards that are not capable of
training new policies from scratch and thus do not capture the intended behavior.
Our work focuses on demonstrating and studying the causes of these relearning
failures in the domain of preference-based reward learning. We demonstrate with
experiments in tabular and continuous control environments that the severity of
relearning failures can be sensitive to changes in reward model design and the
trajectory dataset composition. Based on our findings, we emphasize the need for
more retraining-based evaluations in the literature.
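To make the setting concrete, below is a minimal sketch of the standard Bradley-Terry preference objective commonly used in preference-based reward learning: a small reward network is fit so that the segment preferred by the annotator receives the higher summed reward. The class and function names, network sizes, and tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Small MLP mapping (state, action) features to a scalar per-step reward.
    Architecture is an illustrative assumption, not the paper's design."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # obs: (batch, T, obs_dim), act: (batch, T, act_dim) -> (batch, T)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(reward_model, traj_a, traj_b, prefer_a):
    """Bradley-Terry preference loss: P(a preferred over b) = sigmoid(R(a) - R(b)),
    where R(.) sums the predicted per-step rewards over a trajectory segment.
    traj_a, traj_b: (obs, act) tensor pairs of shape (batch, T, dim).
    prefer_a: float tensor of shape (batch,), 1.0 if segment a was preferred."""
    obs_a, act_a = traj_a
    obs_b, act_b = traj_b
    ret_a = reward_model(obs_a, act_a).sum(dim=-1)  # predicted return of segment a
    ret_b = reward_model(obs_b, act_b).sum(dim=-1)  # predicted return of segment b
    logits = ret_a - ret_b
    return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)
```

A retraining-based evaluation in this setting would freeze the learned `RewardModel` after fitting it on preference data and use it as the only reward signal when training a fresh policy from scratch, rather than judging the reward by the policy optimized alongside it.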
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/on-the-fragility-of-learned-reward-functions/code)