Understanding Adversarial Robustness of Symmetric Networks
Sandesh Kamath, Amit Deshpande
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
Abstract: Neural network-based models for vision are known to be vulnerable to various adversarial attacks. Some adversarial perturbations are model-dependent, and exploit the loss function gradients of the models to make very small, pixel-wise changes. Other adversarial perturbations are model-agnostic, and include spatial transformations such as rotations, translations, scaling, etc. Convolutional Neural Networks (CNNs) are translation equivariant by construction, but recent work by Engstrom et al. (2017) has shown that they too are vulnerable to natural adversarial attacks based on rotation and translation.
In this paper, we consider Group-equivariant Convolutional Neural Networks (GCNNs), proposed by Cohen & Welling (2016), which are rotation equivariant by construction, and study their robustness to adversarial attacks based on rotations as well as pixel-wise perturbations. We observe that GCNNs are robust to small rotations beyond those present in the training data, and that data augmentation further increases this robustness.
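The rotation-based attack studied here can be sketched as a simple grid search over angles, in the spirit of Engstrom et al. (2017): rotate the input by each candidate angle and keep any rotation that flips the classifier's prediction. The sketch below is illustrative only; the names `predict`, `worst_of_k_rotation`, and the nearest-neighbour rotation helper are assumptions, not code from the paper.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate a square 2-D image about its centre (nearest-neighbour sampling)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    src_y = cos_t * (ys - cy) + sin_t * (xs - cx) + cy
    src_x = -sin_t * (ys - cy) + cos_t * (xs - cx) + cx
    src_y = np.clip(np.round(src_y).astype(int), 0, h - 1)
    src_x = np.clip(np.round(src_x).astype(int), 0, w - 1)
    return img[src_y, src_x]

def worst_of_k_rotation(img, label, predict, angles):
    """Grid-search attack (hypothetical): return the first rotation that
    changes the model's prediction, or None if the model is robust on this grid."""
    for a in angles:
        rotated = rotate_nn(img, a)
        if predict(rotated) != label:
            return a, rotated   # adversarial rotation found
    return None, img
```

A model whose prediction is unchanged for every angle on the grid is considered robust to rotations at that granularity; the paper's observation is that GCNNs pass this test for small angles even without augmentation.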