Adversarial Training Blocks Generalization in Neural Policies

09 Oct 2021 (modified: 06 Dec 2021), NeurIPS 2021 Workshop DistShift Poster
Keywords: deep reinforcement learning, generalization, adversarial, adversarial training, reinforcement learning
Abstract: Deep neural networks have made it possible for reinforcement learning algorithms to learn from raw, high-dimensional inputs. This progress has led to deep reinforcement learning algorithms being deployed in many different fields, from financial markets to biomedical applications. Deep reinforcement learning agents, however, inherit the vulnerability of deep neural networks to imperceptible, specifically crafted perturbations, and several adversarial training methods have been proposed to overcome this vulnerability. In this paper we focus on state-of-the-art adversarial training algorithms and investigate their robustness to semantically meaningful natural perturbations, ranging from changes in brightness to rotation. We conduct several experiments in the OpenAI Atari environments and find that state-of-the-art adversarially trained neural policies are more sensitive to natural perturbations than vanilla trained agents. We believe our investigation reveals intriguing properties of adversarial training, and that our observations can help build robust and generalizable neural policies.
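The abstract's natural perturbations (brightness shifts and rotations of the observation frame) can be sketched as simple transforms applied to an Atari-style observation before it reaches the policy. The function names and parameter choices below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def perturb_brightness(obs, delta):
    """Shift pixel intensities by delta, clipping to the valid uint8 range.
    (Illustrative; the paper does not specify its exact brightness transform.)"""
    return np.clip(obs.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def perturb_rotation(obs, angle_deg):
    """Rotate the frame by angle_deg degrees, keeping the original shape.
    (Illustrative rotation via scipy.ndimage; edge handling is an assumption.)"""
    return rotate(obs, angle_deg, reshape=False, order=1, mode="nearest").astype(obs.dtype)

# Example on a synthetic 84x84 grayscale Atari-style frame.
frame = np.random.randint(0, 256, size=(84, 84), dtype=np.uint8)
bright = perturb_brightness(frame, 30)
rotated = perturb_rotation(frame, 5.0)
```

An evaluation under such perturbations would pass `bright` or `rotated` (instead of `frame`) to the trained policy and compare episode returns against the unperturbed baseline.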