Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?

05 Oct 2022 (modified: 17 Nov 2024), Offline RL Workshop, NeurIPS 2022
Keywords: Offline RL, Causal Confusion, Active Learning, Active Sampling
TL;DR: This paper investigates whether active sampling can alleviate causal confusion in offline RL.
Abstract: Causal confusion is a phenomenon in which an agent learns a policy that reflects imperfect, spurious correlations in the data. Such a policy may falsely appear to be optimal during training if most of the training data contains such spurious correlations. The phenomenon is particularly pronounced in domains such as robotics, where there can be large gaps between an agent's open- and closed-loop performance. In such settings, causally confused models may appear to perform well according to open-loop metrics during training but fail catastrophically when deployed in the real world. In this paper, we investigate whether selectively sampling appropriate points from the dataset can enable offline RL agents to disambiguate the underlying causal mechanisms of the environment, alleviate causal confusion in offline reinforcement learning, and produce a safer model for deployment. To answer this question, we consider a set of tailored offline reinforcement learning datasets that exhibit causal ambiguity and assess the ability of active sampling techniques to reduce causal confusion at evaluation time. We provide empirical evidence that both uniform and active sampling techniques consistently reduce causal confusion as training progresses, and that active sampling does so significantly more efficiently than uniform sampling.
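The abstract contrasts uniform sampling with active sampling of training points from a fixed offline dataset. As one concrete reading, here is a minimal sketch that prioritizes transitions by their most recent training loss, a common active-sampling heuristic; the class and names (`ActiveSampler`, `priorities`) are illustrative assumptions, not the authors' method or acquisition function.

```python
import numpy as np

rng = np.random.default_rng(0)

class ActiveSampler:
    """Draws dataset indices in proportion to a per-sample priority score.

    Hypothetical sketch: the paper's actual acquisition function and agent
    are not specified in this abstract.
    """

    def __init__(self, size):
        # Start uniform; priorities are refreshed from observed training losses.
        self.priorities = np.ones(size)

    def sample(self, batch_size):
        probs = self.priorities / self.priorities.sum()
        return rng.choice(len(probs), size=batch_size, p=probs)

    def update(self, indices, losses):
        # Higher recent loss -> more likely to be revisited, so training
        # focuses on points that may disambiguate the causal structure.
        self.priorities[indices] = losses + 1e-6  # keep strictly positive

# Toy training loop: the uniform-sampling baseline would replace
# `sampler.sample` with rng.integers(dataset_size, size=batch_size).
dataset_size, batch_size = 10_000, 256
sampler = ActiveSampler(dataset_size)
for step in range(100):
    idx = sampler.sample(batch_size)
    losses = rng.random(batch_size)  # stand-in for the agent's per-sample loss
    sampler.update(idx, losses)
```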
Community Implementations: 1 code implementation (via CatalyzeX): https://www.catalyzex.com/paper/can-active-sampling-reduce-causal-confusion/code