Delving into adversarial attacks on deep policies

19 Apr 2024 (modified: 12 Mar 2017) · ICLR 2017 workshop submission
Abstract: Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results in training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study of adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of attacks using adversarial examples versus random noise. We present a method, based on the value function, for reducing the number of times adversarial examples need to be injected for a successful attack. We further explore how re-training on random noise and FGSM perturbations affects resilience against adversarial examples.
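As a rough illustration of the attack setting described above, the sketch below applies an FGSM-style perturbation to a policy network's pixel observation and injects it only when the agent's value estimate is high, so the attack fires in fewer frames. The network architecture, epsilon, value threshold, and function names here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: FGSM perturbation of a policy's pixel input,
# injected only in states the critic deems important. All shapes and
# hyperparameters below are assumptions, not the paper's actual setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ActorCritic(nn.Module):
    """Tiny actor-critic over flattened 84x84 grayscale frames (assumed shape)."""

    def __init__(self, num_actions: int = 6):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 256), nn.ReLU())
        self.policy_head = nn.Linear(256, num_actions)  # action logits
        self.value_head = nn.Linear(256, 1)              # state-value estimate

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.policy_head(h), self.value_head(h).squeeze(-1)


def fgsm_perturb(model: ActorCritic, obs: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """One FGSM step against the policy: nudge the input in the direction that
    increases the loss on the action the clean policy would have taken."""
    obs = obs.clone().detach().requires_grad_(True)
    logits, _ = model(obs)
    target = logits.argmax(dim=-1)              # action preferred on the clean input
    loss = F.cross_entropy(logits, target)      # loss w.r.t. that preferred action
    loss.backward()
    adv = obs + epsilon * obs.grad.sign()       # ascend the loss to degrade the action
    return adv.clamp(0.0, 1.0).detach()


def maybe_attack(model: ActorCritic, obs: torch.Tensor, value_threshold: float = 0.5) -> torch.Tensor:
    """Gate the attack on the value estimate, so adversarial frames are
    injected only when the state looks important to the agent."""
    with torch.no_grad():
        _, value = model(obs)
    if value.item() > value_threshold:
        return fgsm_perturb(model, obs)
    return obs


if __name__ == "__main__":
    model = ActorCritic()
    frame = torch.rand(1, 84 * 84)  # stand-in for a raw pixel observation
    attacked = maybe_attack(model, frame)
    print("perturbation L-inf norm:", (attacked - frame).abs().max().item())
```

The same loop could be used to compare against a uniform-noise baseline (replace `obs.grad.sign()` with random signs) or to generate perturbed frames for re-training, which is the resilience experiment the abstract refers to.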
TL;DR: A study of adversarial attacks on deep reinforcement learning policies.
Conflicts: berkeley.edu
4 Replies
