Bag of Tricks for FGSM Adversarial Training

16 May 2022 (modified: 03 Jul 2024) · NeurIPS 2022 Submission
Keywords: adversarial training
Abstract: Adversarial training (AT) with samples generated by the Fast Gradient Sign Method (FGSM), also known as FGSM-AT, is a computationally cheap way to train robust networks. However, its training procedure can fall into an unstable mode of ``catastrophic overfitting'', identified in~\cite{Wong2020FastIB}, in which robust accuracy abruptly drops to zero within a single training step. Existing methods attenuate this issue with gradient regularizers or random-initialization tricks, but they either incur high computational cost or yield lower robust accuracy. In this work, we provide the first study that thoroughly examines a collection of tricks from three perspectives -- Data Initialization, Network Structure, and Optimization -- to overcome catastrophic overfitting in FGSM-AT. Surprisingly, we find that simple tricks, i.e., masking partial pixels (even without randomness), setting a large convolution stride and smooth activation functions, or regularizing the weights of the first convolutional layer, can effectively tackle the overfitting issue. Extensive results on a range of network architectures validate the effectiveness of each proposed trick, and combinations of tricks are also investigated. For example, trained with PreActResNet-18 on CIFAR-10, our method attains 51.3\% accuracy against a PGD-10 attacker and 46.4\% accuracy against AutoAttack, demonstrating that pure FGSM-AT is capable of producing robust learners. We will release our code to encourage future exploration of the potential of FGSM-AT.
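As a point of reference, the FGSM attack step underlying FGSM-AT can be sketched in PyTorch as below. The optional `mask_ratio` argument is a hypothetical illustration of the pixel-masking trick mentioned in the abstract, not the authors' released implementation; names and defaults are assumptions for the sketch.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, mask_ratio=0.0):
    """One FGSM attack step: perturb x by eps * sign(grad of loss w.r.t. x).

    If mask_ratio > 0, a fixed fraction of pixels is zeroed out first,
    a rough sketch of the pixel-masking trick (hypothetical version).
    """
    x_adv = x.clone().detach()
    if mask_ratio > 0:
        # Zero out roughly mask_ratio of the pixel locations (shared across channels).
        mask = (torch.rand_like(x_adv[:, :1]) > mask_ratio).float()
        x_adv = x_adv * mask
    x_adv.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Single-step sign-gradient perturbation, clipped to the valid pixel range.
    x_adv = x_adv + eps * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In FGSM-AT, the model is then trained on `fgsm_perturb(model, x, y, eps)` in place of the clean batch; catastrophic overfitting refers to this single-step inner attack collapsing, which the tricks in the paper aim to prevent.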
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/bag-of-tricks-for-fgsm-adversarial-training/code)