Abstract: Adversarial examples (AEs), inputs carrying small adversarial perturbations, can mislead deep neural networks (DNNs) into wrong predictions. AEs crafted on one DNN can also fool other networks. Over the last few years, this transferability of AEs has garnered significant attention because it is the key property enabling black-box attacks. Many approaches have been proposed to improve transferability, and prior work reports that adversarial attacks transfer remarkably well across Convolutional Neural Networks (CNNs). However, such evaluations are unreliable because all CNNs share similar architectural biases. In this work, we re-evaluate 13 representative transferability-enhancing attack methods on 18 popular models drawn from 4 types of neural network architectures. Contrary to the prevailing belief, our re-evaluation reveals that adversarial transferability across these diverse network types is notably diminished, and no single AE transfers to all popular models. Moreover, the transferability ranking of previous attack methods changes under our comprehensive evaluation. Based on our analysis, we propose a reliable benchmark comprising three evaluation protocols, and we release the code, model checkpoints, and evaluation protocols to facilitate future research.
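To make the transferability setting concrete, below is a minimal sketch (not the paper's released code) of how cross-architecture transfer is typically measured: an AE is crafted with one-step FGSM on a white-box surrogate model and then evaluated on a target model from a different architecture family. The ResNet-50/ViT-B/16 surrogate-target pairing, the epsilon of 8/255, and the random dummy batch are illustrative assumptions; ImageNet normalization is omitted for brevity.

```python
import torch
import torchvision.models as models

# Surrogate (CNN) and target (Vision Transformer) from different families;
# this particular pairing is an illustrative assumption.
surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT).eval()

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: move x in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Dummy batch standing in for preprocessed ImageNet images (assumption).
x = torch.rand(4, 3, 224, 224)
y = surrogate(x).argmax(dim=1)  # surrogate predictions serve as labels

x_adv = fgsm(surrogate, x, y)   # white-box attack on the surrogate
with torch.no_grad():
    # Transfer fooling rate: how often the target's prediction flips.
    flipped = target(x_adv).argmax(dim=1) != target(x).argmax(dim=1)
print(f"transfer fooling rate: {flipped.float().mean():.2f}")
```

The paper's finding is that this rate, which is high when both models are CNNs, drops notably once the target comes from a different network type.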