Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

Fanny Yang, Zuowen Wang, Christina Heinze-Deml

May 28, 2019 · ICML 2019 Workshop Deep Phenomena · Blind Submission
  • Keywords: adversarial training, robustness, spatial transformations, invariance
  • Abstract: This work provides theoretical and empirical evidence that adding an invariance-inducing regularizer to standard and adversarial training increases predictive accuracy under worst-case spatial transformations (spatial robustness). With the same computational budget, regularized training reduces the spatially robust error on CIFAR-10 by 20% relative, even surpassing hand-crafted spatially equivariant networks. On SVHN, we additionally observe that regularized training improves both standard test accuracy and robust accuracy against spatial transformations.
  • TL;DR: for spatial transformations, the robust minimizer also minimizes the standard risk; invariance-inducing regularization leads to better robustness than specialized architectures (a schematic sketch of the regularizer follows this list).
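
Below is a minimal PyTorch sketch (not the authors' released code) of the kind of invariance-inducing regularizer described in the abstract: a small grid of candidate rotations and translations is searched for the worst-case transformation, and the KL divergence between the model's predictions on clean and transformed inputs is added to the standard cross-entropy loss. The helper names (`rotate_translate`, `worst_case_invariance_penalty`), the candidate angle/shift grids, and the weight `lam` are illustrative assumptions, not the paper's exact settings.

```python
import math

import torch
import torch.nn.functional as F


def rotate_translate(x, angle_deg, tx, ty):
    # Build a 2x3 affine matrix (rotation by angle_deg degrees, shift by
    # tx/ty as fractions of width/height) and resample the batch with it.
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    theta = torch.tensor([[c, -s, tx], [s, c, ty]],
                         dtype=x.dtype, device=x.device)
    theta = theta.unsqueeze(0).repeat(x.size(0), 1, 1)
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


def worst_case_invariance_penalty(model, x,
                                  angles=(-30.0, 0.0, 30.0),
                                  shifts=(-0.1, 0.0, 0.1)):
    # Grid-search a small set of rotations/translations and keep the largest
    # KL divergence between predictions on clean and transformed inputs.
    p_clean = F.softmax(model(x), dim=1).detach()
    worst = x.new_zeros(())
    for a in angles:
        for t in shifts:
            log_q = F.log_softmax(model(rotate_translate(x, a, t, t)), dim=1)
            kl = F.kl_div(log_q, p_clean, reduction="batchmean")
            worst = torch.maximum(worst, kl)
    return worst


def regularized_loss(model, x, y, lam=1.0):
    # Standard cross-entropy plus the invariance-inducing regularizer.
    return F.cross_entropy(model(x), y) + lam * worst_case_invariance_penalty(model, x)
```

In practice one would compute `regularized_loss` per batch and back-propagate as usual; the exhaustive grid over transformations can be replaced by random sampling or an adversarial search over transformation parameters within the same computational budget.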