A Study of Causal Confusion in Preference-Based Reward Learning

Published: 21 Jul 2022, Last Modified: 22 Oct 2023
SCIS 2022 Poster
Keywords: causal reward confusion, preference learning, reward learning
TL;DR: We study factors that lead to causal confusion when learning reward functions from pairwise preferences.
Abstract: While there is much empirical and theoretical analysis of causal confusion and reward gaming behaviors in reinforcement learning and behavioral cloning, we provide the first systematic study of causal confusion in the context of learning reward functions from preferences. We identify three benchmark domains in which we observe causal confusion when learning reward functions from offline datasets of pairwise trajectory preferences: a simple reacher domain, an assistive feeding domain, and an itch-scratching domain. To gain insight into this observed causal confusion, we perform a sensitivity analysis on the effect of two factors, the reward model capacity and the feature dimensionality, on the robustness of rewards learned from preferences. We find evidence that learning rewards from preferences is highly sensitive and non-robust to spurious features and to increasing model capacity. Videos, code, and supplemental results are available at https://sites.google.com/view/causal-reward-confusion.
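The abstract does not spell out how rewards are learned from pairwise preferences, but the standard formulation in this literature is a Bradley-Terry model: a small reward network scores each trajectory, and the network is trained with a cross-entropy loss so that the preferred trajectory receives the higher predicted return. The sketch below illustrates that setup only; the `RewardNet` architecture, the `preference_loss` helper, the feature dimensionality, and the synthetic offline dataset are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Small MLP reward model r_theta(s) applied to per-step features."""

    def __init__(self, feature_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, traj):
        # traj: (T, feature_dim) -> predicted return (sum of per-step rewards)
        return self.net(traj).sum()


def preference_loss(reward_net, traj_a, traj_b, pref):
    """Bradley-Terry cross-entropy loss for one labeled trajectory pair.

    pref = 1.0 if traj_a is preferred, 0.0 if traj_b is preferred.
    """
    returns = torch.stack([reward_net(traj_a), reward_net(traj_b)])
    log_probs = torch.log_softmax(returns, dim=0)
    return -(pref * log_probs[0] + (1.0 - pref) * log_probs[1])


if __name__ == "__main__":
    feature_dim = 10  # hypothetical feature dimensionality (may include spurious features)
    net = RewardNet(feature_dim)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Toy offline preference dataset: random trajectory pairs with synthetic labels.
    dataset = [
        (torch.randn(50, feature_dim),
         torch.randn(50, feature_dim),
         torch.tensor(float(torch.rand(()) > 0.5)))
        for _ in range(100)
    ]

    for epoch in range(5):
        for traj_a, traj_b, pref in dataset:
            loss = preference_loss(net, traj_a, traj_b, pref)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Under this formulation, the sensitivity analysis described in the abstract corresponds to varying the reward model's capacity (e.g., hidden layer width) and the number of input features (including spurious ones) and measuring how robust the learned reward remains.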
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2204.06601/code)