A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

27 Sept 2018 (modified: 21 Apr 2024) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: We show that simple spatial transformations, namely translations and rotations alone, suffice to fool neural networks on a significant fraction of their inputs in multiple image classification tasks. Our results are in sharp contrast to previous work in adversarial robustness that relied on more complicated optimization approaches unlikely to appear outside a truly adversarial context. Moreover, the misclassifying rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need to design robust classifiers even for natural input transformations in benign settings.
Keywords: robustness, spatial transformations, invariance, rotations, data augmentation, robust optimization
TL;DR: We show that CNNs are not robust to simple rotations and translations, and we explore methods for improving this robustness.
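To make the black-box attack described in the abstract concrete, below is a minimal sketch of a random search over the rotation-translation space using PyTorch and torchvision. The function name, query budget, and transformation ranges are illustrative assumptions rather than the paper's exact configuration (the paper also evaluates grid search and first-order methods); the only access to the model is its predicted label, i.e., a black-box query.

```python
import torch
import torchvision.transforms.functional as TF

def find_adversarial_transform(model, image, label,
                               max_queries=100,      # query budget (assumed)
                               max_rotation=30.0,    # degrees (assumed range)
                               max_translation=3):   # pixels (assumed range)
    """Random black-box search for a misclassifying rotation + translation.

    `model` is any callable mapping a (1, C, H, W) tensor to logits;
    `image` is a (C, H, W) tensor and `label` its correct class index.
    """
    model.eval()
    with torch.no_grad():
        for _ in range(max_queries):
            # Sample a rotation angle and integer pixel shifts uniformly.
            angle = (torch.rand(1).item() * 2 - 1) * max_rotation
            dx = torch.randint(-max_translation, max_translation + 1, (1,)).item()
            dy = torch.randint(-max_translation, max_translation + 1, (1,)).item()

            # Apply only rotation + translation (no scaling or shearing).
            transformed = TF.affine(image, angle=angle,
                                    translate=[dx, dy],
                                    scale=1.0, shear=[0.0])

            # One black-box query: check whether the prediction flips.
            pred = model(transformed.unsqueeze(0)).argmax(dim=1).item()
            if pred != label:
                return angle, (dx, dy), transformed
    return None  # no misclassifying transform found within the budget
```

In line with the abstract's claim, even this naive random search typically needs only a handful of queries per image to flip the prediction of a non-robust classifier, since it exploits no gradient information at all.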
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1712.02779/code)