Understanding Adversarial Robustness of Symmetric Networks

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: Neural network-based models for vision are known to be vulnerable to various adversarial attacks. Some adversarial perturbations are model-dependent and exploit the loss function gradients of the models to make very small, pixel-wise changes. Other adversarial perturbations are model-agnostic and include spatial transformations such as rotations, translations, and scaling. Convolutional Neural Networks (CNNs) are translation equivariant by construction, but recent work by Engstrom et al. (2017) has shown that they too are vulnerable to natural adversarial attacks based on rotation and translation. In this paper, we consider Group-equivariant Convolutional Neural Networks (GCNNs) proposed by Cohen & Welling (2016), which are rotation equivariant by construction, and study their robustness to adversarial attacks based on rotations as well as pixel-wise perturbations. We observe that GCNNs are robust to small degrees of rotation away from the ones present in the training data. We also observe that applying data augmentation increases their robustness.
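To make the model-agnostic attack setting concrete, the sketch below searches over a small range of rotation angles and returns the rotation that most reduces a classifier's confidence in the true class, in the spirit of the rotation/translation attacks of Engstrom et al. (2017). This is an illustrative assumption of how such an attack can be implemented, not the paper's exact procedure; `model`, the angle range, and the helper name are all hypothetical, and the target model could equally be a standard CNN or a GCNN.

```python
# Minimal sketch of a rotation-based (model-agnostic) adversarial attack:
# grid-search over candidate angles and report the rotation that most damages
# the model's prediction. Names and parameters here are illustrative.
import torch
import torchvision.transforms.functional as TF


def worst_case_rotation(model, image, label, angles=range(-30, 31, 2)):
    """Return (angle, predicted_class, true_class_prob) for the most damaging rotation.

    image: tensor of shape (C, H, W); label: ground-truth class index.
    """
    model.eval()
    worst = (0, None, float("inf"))  # (angle, prediction, confidence in true class)
    with torch.no_grad():
        for angle in angles:
            rotated = TF.rotate(image, float(angle))   # spatial transformation, no gradient needed
            logits = model(rotated.unsqueeze(0))       # add batch dimension
            probs = torch.softmax(logits, dim=1)[0]
            if probs[label] < worst[2]:                # keep the angle where the true class is least likely
                worst = (angle, int(probs.argmax()), float(probs[label]))
    return worst
```

Unlike gradient-based pixel-wise attacks (e.g., FGSM-style perturbations), this search needs only forward passes, which is why such spatial attacks are called model-agnostic.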