Abstract: We show that simple spatial transformations, namely translations and rotations alone, suffice to fool neural networks on a significant fraction of their inputs in multiple image classification tasks. Our results are in sharp contrast to previous work in adversarial robustness that relied on more complicated optimization approaches unlikely to appear outside a truly adversarial context. Moreover, the misclassifying rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need to design robust classifiers even for natural input transformations in benign settings.
Keywords: robustness, spatial transformations, invariance, rotations, data augmentation, robust optimization
TL;DR: We show that CNNs are not robust to simple rotations and translations and explore methods of improving this.
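The attack described in the abstract can be reproduced in spirit by a simple search over a small grid of rotation angles and pixel shifts, querying the target model only for its predicted label. Below is a minimal sketch assuming a PyTorch classifier `model` and a labeled input tensor `x`; the helper name `grid_spatial_attack`, the grid ranges, and the use of `torchvision.transforms.functional.affine` are illustrative assumptions, not the paper's exact procedure or parameters.

```python
import itertools
import torch
import torchvision.transforms.functional as TF

def grid_spatial_attack(model, x, label,
                        max_angle=30.0, max_shift=3,
                        n_angles=31, n_shifts=7):
    """Search a grid of rotations/translations for a misclassifying one.

    model: a classifier returning logits; x: input tensor of shape (1, C, H, W);
    label: ground-truth class id. Returns the first (angle, dx, dy) whose
    transformed image is misclassified, or None if the grid contains none.
    """
    model.eval()
    angles = torch.linspace(-max_angle, max_angle, n_angles).tolist()
    shifts = torch.linspace(-max_shift, max_shift, n_shifts).round().int().tolist()
    with torch.no_grad():
        for angle, dx, dy in itertools.product(angles, shifts, shifts):
            # Apply the candidate rotation + translation. The attack is
            # black-box: only the predicted label is queried, no gradients.
            x_t = TF.affine(x, angle=angle, translate=[dx, dy],
                            scale=1.0, shear=[0.0])
            pred = model(x_t).argmax(dim=1).item()
            if pred != label:
                return angle, dx, dy
    return None  # no fooling transformation found on this grid
```

On a correctly classified example, the returned triple (angle, dx, dy) is a purely spatial perturbation that flips the prediction; sampling the same ranges at random instead of exhaustively sweeping them is one way to keep the number of black-box queries small.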
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/a-rotation-and-a-translation-suffice-fooling/code)