Assessing Deep Reinforcement Learning Policies via Natural Corruptions at the Edge of Imperceptibility

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Abstract: Deep reinforcement learning algorithms have recently achieved significant success in learning high-performing policies from purely visual observations. The ability to perform end-to-end learning from raw, high-dimensional input alone has led to deep reinforcement learning algorithms being deployed in a variety of fields. Understanding and improving the ability of deep reinforcement learning policies to generalize to unseen data distributions is therefore of critical importance. Much recent work has assessed the generalization of deep reinforcement learning policies by introducing specifically crafted adversarial perturbations to their inputs. In this paper, we approach the problem from a different perspective and propose a framework for assessing the generalization skills of trained deep reinforcement learning policies. Rather than focusing on worst-case analysis of distribution shift, our approach is based on black-box perturbations that correspond to minimal, semantically meaningful natural changes to the environment or to the agent's visual observation system, ranging from brightness shifts to compression artifacts. We demonstrate that the perceptual similarity distance between minimal natural perturbations and the unperturbed observations is orders of magnitude smaller than that of adversarial perturbations (i.e., minimal natural perturbations are perceptually more similar to the unperturbed states than adversarial perturbations are), while causing larger degradation in policy performance. Furthermore, we investigate state-of-the-art adversarial training methods and show that adversarially trained deep reinforcement learning policies are more sensitive to almost all of the natural perturbations than vanilla-trained policies. Lastly, we highlight that our framework captures a diverse set of bands in the Fourier spectrum, thus providing a better overall understanding of a policy's generalization capabilities. We believe our work is a crucial step toward building resilient and generalizable deep reinforcement learning policies.
Supplementary Material: zip
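To make the abstract's evaluation protocol concrete, here is a minimal, hypothetical sketch (not the authors' released code) of the core measurement it describes: sweep one natural corruption, here a simple brightness shift, over increasing severities and find the smallest severity that substantially degrades episodic return. The `policy` and `env` objects, their step/reset API, and the severity grid are all assumed interfaces; a perceptual metric such as LPIPS would be computed separately between clean and corrupted observations.

```python
import numpy as np

def perturb_brightness(obs, severity):
    """Brightness shift: add `severity` to every pixel (obs assumed in [0, 1])."""
    return np.clip(obs + severity, 0.0, 1.0)

def episodic_return(policy, env, corruption, severity, n_episodes=10):
    """Average return of `policy` when every observation is corrupted
    before being passed to the policy."""
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = policy(corruption(obs, severity))
            obs, reward, done = env.step(action)  # assumed (obs, reward, done) API
            total += reward
        returns.append(total)
    return float(np.mean(returns))

def minimal_natural_perturbation(policy, env, corruption, severities,
                                 drop_fraction=0.5):
    """Smallest severity whose average return falls below `drop_fraction`
    of the clean return; None if the policy is robust over the whole grid."""
    clean = episodic_return(policy, env, corruption, severity=0.0)
    for s in sorted(severities):
        if episodic_return(policy, env, corruption, s) < drop_fraction * clean:
            return s
    return None
```

In the paper's framing, the severity returned by such a sweep would then be paired with the perceptual similarity distance between the clean and corrupted observations to locate the "edge of imperceptibility" for that corruption type.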