Uncovering Universal Features: How Adversarial Training Improves Adversarial Transferability

Published: 21 Jun 2021, Last Modified: 05 May 2023 · ICML 2021 Workshop AML Poster
Keywords: adversarial examples, transferability, universality
TL;DR: Slightly-robust neural networks can be used to generate state-of-the-art targeted transferable adversarial examples; their effectiveness can be explained by a universality principle.
Abstract: Adversarial examples for neural networks are known to be transferable: examples optimized to be misclassified by a “source” network are often misclassified by other “destination” networks. Here, we show that training the source network to be “slightly robust”---that is, robust to small-magnitude adversarial examples---substantially improves the transferability of targeted attacks, even between architectures as different as convolutional neural networks and transformers. In fact, we show that these adversarial examples can transfer representation (penultimate) layer features substantially better than adversarial examples generated with non-robust networks. We argue that this result supports a non-intuitive hypothesis: slightly robust networks exhibit universal features---ones that tend to overlap with the features learned by all other networks trained on the same dataset. This suggests that the features of a single slightly-robust neural network may be useful to derive insight about the features of every non-robust neural network trained on the same distribution.
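As a rough illustration of the pipeline the abstract describes, the sketch below runs a targeted PGD attack against a hypothetical slightly-robust source model and then checks how often the resulting examples push a separate destination model to the target class. The model handles, epsilon, step size, and iteration count are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch, not the authors' code. Assumptions (hypothetical):
# `source_model` is a slightly-robust classifier (adversarially trained with a
# small epsilon) and `dest_model` is an independently trained non-robust
# network on the same dataset; inputs are in [0, 1].
import torch
import torch.nn.functional as F

def targeted_pgd(source_model, x, target, eps=16/255, alpha=2/255, steps=100):
    """Targeted L-inf PGD: perturb x within an eps-ball so that source_model
    predicts `target`; the hope is that the example also transfers."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step *down* the loss toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep pixel values valid
    return x_adv.detach()

# Usage (models, images, and target_labels are placeholders):
# x_adv = targeted_pgd(source_model, images, target_labels)
# transfer_rate = (dest_model(x_adv).argmax(1) == target_labels).float().mean()
```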