Abstract: Inspired by the success of transformers in natural language processing, vision transformers have been proposed to address a wide range of computer vision tasks, such as image classification, object detection, and image segmentation, and they have achieved very promising performance. However, the robustness of vision transformers remains relatively under-explored. Recent studies have revealed that pre-trained vision transformers are also vulnerable to white-box adversarial attacks on the downstream image classification task: adversarial attacks originally designed for convolutional neural networks (CNNs), such as FGSM and PGD, can also cause a severe performance drop for vision transformers. In this paper, we evaluate the robustness of vision transformers fine-tuned with off-the-shelf methods under adversarial attacks on CIFAR-10 and CIFAR-100. We further propose a data-augmented virtual adversarial training approach, called MixVAT, which enhances the robustness of pre-trained vision transformers against adversarial attacks on downstream tasks by exploiting unlabelled data. Extensive results on multiple datasets demonstrate the superiority of our approach over baselines in adversarial robustness, without compromising the generalization ability of the model.
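To make the threat model concrete, the single-step FGSM attack mentioned in the abstract can be sketched as follows. This is a minimal illustration in PyTorch: the tiny linear classifier, tensor shapes, and the epsilon value are illustrative stand-ins, not the paper's actual vision-transformer setup or hyperparameters.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x by epsilon * sign of the loss gradient w.r.t. x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that increases the loss, then
    # clamp back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy linear classifier as a stand-in for a fine-tuned vision transformer.
torch.manual_seed(0)
model = nn.Linear(4, 3)
x = torch.rand(2, 4)          # two "images" with 4 features each
y = torch.tensor([0, 1])      # ground-truth labels
x_adv = fgsm_attack(model, x, y, epsilon=0.03)
```

PGD is the iterative extension of this step: it repeats the signed-gradient update several times with a smaller step size, projecting back into the epsilon-ball after each iteration.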