Automatic Data Augmentation for Generalization in Reinforcement Learning

21 May 2021, 20:42 (modified: 26 Oct 2021, 15:47) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: reinforcement learning, generalization, data augmentation, automatic machine learning
TL;DR: Learn to automatically select an augmentation from a given set, which is used to regularize the policy and value function of an RL agent. This leads to better zero-shot generalization to new task instances.
Abstract: Deep reinforcement learning (RL) agents often fail to generalize beyond their training environments. To alleviate this problem, recent work has proposed the use of data augmentation. However, different tasks tend to benefit from different types of augmentations and selecting the right one typically requires expert knowledge. In this paper, we introduce three approaches for automatically finding an effective augmentation for any RL task. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for actor-critic algorithms. Our method achieves a new state-of-the-art on the Procgen benchmark and outperforms popular RL algorithms on DeepMind Control tasks with distractors. In addition, our agent learns policies and representations which are more robust to changes in the environment that are irrelevant for solving the task, such as the background.
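The abstract mentions two regularization terms that encourage the policy and value function to be invariant under a chosen augmentation. A minimal sketch of that idea is below, using a toy linear policy/value head and a noise "augmentation" as stand-ins; the function names, network shapes, and the specific augmentation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, N_ACTIONS = 8, 4
W_pi = rng.normal(size=(OBS_DIM, N_ACTIONS))  # toy policy head (assumption)
w_v = rng.normal(size=OBS_DIM)                # toy value head (assumption)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def policy(obs):
    """Action distribution pi(.|obs) from the toy linear head."""
    return softmax(obs @ W_pi)

def value(obs):
    """Scalar state value V(obs) from the toy linear head."""
    return float(obs @ w_v)

def augment(obs):
    # Stand-in augmentation: small additive noise (assumption; the paper
    # selects from image augmentations such as crop or color jitter).
    return obs + 0.01 * rng.normal(size=obs.shape)

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

obs = rng.normal(size=OBS_DIM)
aug_obs = augment(obs)

# Policy regularizer: keep pi(.|s) close to pi(.|aug(s)).
g_pi = kl(policy(obs), policy(aug_obs))
# Value regularizer: keep V(s) close to V(aug(s)).
g_v = (value(obs) - value(aug_obs)) ** 2

# In an actor-critic setup these terms would be added to the usual loss,
# scaled by a regularization coefficient.
print(g_pi, g_v)
```

Both terms are non-negative and vanish when the augmentation leaves the network outputs unchanged, which is what makes them usable as invariance penalties alongside the standard actor-critic objective.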
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.