TL;DR: We study the relationship between the expected error of a reward model on some data distribution and the regret of policies trained using the reward model.
Abstract: In reinforcement learning, specifying reward functions that capture the intended task can be very challenging. Reward learning aims to address this issue by *learning* the reward function. However, a learned reward model may have a low error on the data distribution, and yet subsequently produce a policy with large regret. We say that such a reward model has an *error-regret mismatch*. The main source of an error-regret mismatch is the distributional shift that commonly occurs during policy optimization. In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any *fixed* expected test error, there exist realistic data distributions that allow for error-regret mismatch to occur. We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF. We hope our results stimulate the theoretical and empirical study of improved methods to learn reward models, and better ways to measure their quality reliably.
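For intuition, here is a minimal, hypothetical two-armed bandit sketch of an error-regret mismatch (the reward values, learned reward model, and data distribution below are illustrative assumptions, not taken from the paper): the learned reward model matches the true reward on the arm the data distribution almost exclusively covers, so its expected error is small, yet the policy that greedily optimizes it picks the poorly-covered arm and incurs maximal regret.

```python
import numpy as np

# Hypothetical two-armed bandit illustrating an error-regret mismatch:
# low expected error of the reward model under the data distribution,
# but maximal regret for the policy that optimizes the learned reward.

true_reward = np.array([1.0, 0.0])      # arm 0 is optimal under the true reward
learned_reward = np.array([1.0, 2.0])   # correct on arm 0, badly wrong on arm 1
data_dist = np.array([0.99, 0.01])      # training data rarely covers arm 1

# Expected (L1) error of the learned reward model on the data distribution.
expected_error = np.sum(data_dist * np.abs(learned_reward - true_reward))

# A greedy policy that optimizes the learned reward selects arm 1.
policy_arm = int(np.argmax(learned_reward))

# Regret: gap between the best achievable true reward and what the policy gets.
regret = np.max(true_reward) - true_reward[policy_arm]

print(f"expected error on data distribution: {expected_error:.3f}")  # 0.020
print(f"arm chosen by optimizing the learned reward: {policy_arm}")  # 1
print(f"regret under the true reward: {regret:.3f}")                 # 1.000
```

The mismatch arises from distributional shift: policy optimization concentrates probability on the region where the reward model is wrong, even though that region carries little weight in the training distribution.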
Lay Summary: Teaching AI complex tasks often involves first using examples to train a "reward function" – the AI’s guide for good and bad actions. However, the reward function might look accurate on the examples (low error) but still encourage poor AI decisions in practice (high regret). This "error-regret mismatch" occurs when the AI can exploit loopholes where the reward function is wrong.
We mathematically investigated why and when this mismatch happens. We discovered that even if a reward function seems well-trained, the AI can still perform badly if the reward function’s training examples have certain "unsafe" characteristics – essentially, when they don’t cover important scenarios. Our work precisely defines what makes a set of training examples "unsafe" and shows that common safety techniques, like adding penalties to discourage extreme behaviors, don't always fix this fundamental issue.
As AI systems increasingly learn their tasks from data, understanding this error-regret mismatch is crucial. Our findings explain a key reason why an AI might act in undesirable ways. This knowledge helps pave the way for developing AI systems that learn more reliably and behave more consistently with our intended goals, which is vital for building AI we can trust.
Primary Area: Social Aspects->Safety
Keywords: Reward learning, RLHF, RL, Safety, Distributional shift, Generalization, Learning Theory
Submission Number: 11860