Keywords: adversarial robustness, adversarial training, deep neural network
Abstract: Adversarial training has become the most popular and effective strategy for improving Deep Neural Network (DNN) robustness against adversarial noise. Many adversarial training methods have been proposed in the past few years. However, most adversarial training methods are highly sensitive to hyperparameters, especially the training noise upper bound. Tuning these hyperparameters is expensive on large datasets and difficult for practitioners outside the adversarial robustness research domain, which prevents adversarial training techniques from being adopted in many application fields. This paper introduces a new adversarial training method with a gradual expansion mechanism for generating adversarial training samples, which is hyperparameter-free for the user. By gradually expanding the exploration range with a self-adaptive, gradient-aware step size, adversarial training samples can be placed at optimal locations in the input data space. Unlike other defense methods, which usually need to fine-tune hyperparameters (e.g., the training noise upper bound) by grid search, our method exposes no hyperparameters to the user. We name our method Self-adaptive Margin Defense (SMD). We evaluate SMD on three publicly available datasets (CIFAR10, SVHN, and Fashion-MNIST) under the most popular adversarial attacks, AutoAttack and PGD. The results show that: (1) compared with all other competing defense methods, SMD has the best overall robust accuracy on noisy data; (2) SMD's accuracy degradation on clean data is among the smallest of all competing defense methods.
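To make the gradual expansion idea in the abstract concrete, below is a minimal PGD-style sketch of generating an adversarial training sample by progressively enlarging the allowed perturbation bound while taking gradient-aware steps. The function name, the linear expansion schedule, and the specific step-size rule are illustrative assumptions, not the authors' actual SMD algorithm.

```python
import torch
import torch.nn.functional as F

def gradual_expansion_example(model, x, y, max_eps=8/255, num_steps=10):
    """Illustrative sketch only: grow the perturbation bound step by step and
    move the sample with a gradient-aware step (not the exact SMD procedure)."""
    x_adv = x.clone().detach()
    for t in range(1, num_steps + 1):
        # Hypothetical linear schedule for the current noise bound.
        eps_t = max_eps * t / num_steps

        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Hypothetical gradient-aware step sized by the per-step budget.
        x_adv = x_adv.detach() + (max_eps / num_steps) * grad.sign()

        # Project back into the current (expanding) L-infinity ball and valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps_t), x + eps_t)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Under this sketch, the perturbation budget starts small and reaches the full bound only at the final step; in SMD the expansion and step size are self-adaptive rather than following a fixed schedule as assumed here.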
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)
Supplementary Material: zip