Abstract: Neural network policies trained using Deep Reinforcement
Learning (DRL) are well-known to be susceptible to adversarial attacks.
In this paper, we consider attacks manifesting as perturbations in the
observation space managed by the external environment. These attacks
have been shown to significantly degrade policy performance. We focus on well-trained deterministic and stochastic neural
network policies in the context of continuous control benchmarks subject to four well-studied observation space adversarial attacks. To defend
against these attacks, we propose a novel defense strategy using a detect-and-denoise schema. Unlike previous adversarial training approaches, our solution does not require sampling data from an environment under attack, thereby greatly reducing risk
during training. Detailed experimental results show that our technique
is comparable to state-of-the-art adversarial training approaches.