EFFICIENT TWO-STEP ADVERSARIAL DEFENSE FOR DEEP NEURAL NETWORKS

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: In recent years, deep neural networks have demonstrated outstanding performance in many machine learning tasks. However, researchers have discovered that these state-of-the-art models are vulnerable to adversarial examples: legitimate examples modified by small perturbations that are unnoticeable to the human eye. Adversarial training, which augments the training data with adversarial examples during the training process, is a well-known defense that improves the robustness of the model against adversarial attacks. However, this robustness is only effective against the same attack method used for adversarial training. Madry et al. (2017) highlight the effectiveness of iterative multi-step adversarial attacks, suggest that projected gradient descent (PGD) may be considered the universal first-order adversary, and argue that adversarial training with PGD therefore confers resistance against many other first-order attacks. However, the computational cost of adversarial training with PGD and other multi-step adversarial examples is much higher than that of adversarial training with simpler attack techniques. In this paper, we show how strong adversarial examples can be generated at a cost similar to that of two runs of the fast gradient sign method (FGSM), enabling a defense against adversarial attacks with a robustness level comparable to that of adversarial training with multi-step adversarial examples. We empirically demonstrate the effectiveness of the proposed two-step defense approach against different attack methods and its improvements over existing defense strategies.
Keywords: Adversarial Examples, Adversarial Training, FGSM, IFGSM, Robustness
TL;DR: We propose a time-efficient defense method against one-step and iterative adversarial attacks.
Data: [MNIST](https://paperswithcode.com/dataset/mnist), [SVHN](https://paperswithcode.com/dataset/svhn)
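
The abstract describes generating adversarial examples at roughly the cost of two FGSM gradient computations, but it does not spell out the exact update rule. The sketch below is only a minimal illustration of that cost budget: standard single-step FGSM next to a two-iteration, IFGSM-style attack (two signed-gradient steps with projection back into the epsilon ball). The model, loss, epsilon, and step size are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch (PyTorch): single-step FGSM vs. a two-step, IFGSM-style attack.
# Hypothetical model/eps/step choices for illustration; not the paper's exact method.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One signed-gradient step on the cross-entropy loss (Goodfellow et al., 2014)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return torch.clamp(x_adv + eps * x_adv.grad.sign(), 0.0, 1.0).detach()

def two_step_attack(model, x, y, eps, step=None):
    """Two signed-gradient steps projected into the L-inf eps-ball around x.
    Costs about two forward/backward passes, i.e. roughly two FGSM runs."""
    step = eps / 2 if step is None else step  # assumed step size
    x_adv = x.clone().detach()
    for _ in range(2):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()
            # project back into the eps-ball and the valid pixel range
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

In the usual adversarial-training pattern the abstract refers to, one of these attack functions would be applied to each minibatch and the resulting examples mixed into (or substituted for) the clean training data; the two-step variant keeps that cost close to two FGSM runs rather than the many iterations PGD requires.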