Evaluating Model Robustness to Patch Perturbations

Published: 12 Jul 2022, Last Modified: 05 May 2023
Shift Happens 2022 Poster
Keywords: Natural Patch Corruption, Adversarial Patch, Vision Transformer
TL;DR: Given the architectural traits of state-of-the-art ViTs, we study model robustness to natural patch corruption and adversarial patches.
Abstract: Recent advances in Vision Transformers (ViTs) have demonstrated impressive performance in image classification, making ViTs a promising alternative to Convolutional Neural Networks (CNNs). Unlike CNNs, ViT represents an input image as a sequence of image patches. This patch-based input representation raises a natural question: compared to CNNs, how does ViT perform when individual input patches are perturbed with natural corruptions or adversarial perturbations? In this submission, we propose to evaluate model robustness to patch-wise perturbations. We consider two types of patch perturbation. The first is natural corruption, which tests a model's robustness under distributional shift; the second is adversarial perturbation, which an adversary crafts specifically to fool the model into making a wrong prediction. The experimental results on popular CNNs and ViTs are surprising: ViTs are more robust to naturally corrupted patches than CNNs, yet more vulnerable to adversarial patches. Given the architectural traits of state-of-the-art ViTs and these findings, we propose adding robustness to natural patch corruption and adversarial patch attacks to the robustness benchmark.
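
Below is a minimal sketch of the two patch-perturbation settings the abstract describes: Gaussian noise on randomly chosen patches stands in for the natural-corruption setting, and a sign-gradient attack confined to a single patch stands in for the adversarial-patch setting. The 16x16 patch size, noise level, step counts, attacked patch location, and the toy classifier are all illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F

def corrupt_patches(images, patch_size=16, num_patches=4, noise_std=0.5):
    """Natural-corruption setting: add Gaussian noise to a few random patches."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    out = images.clone()
    for i in range(b):
        # Pick `num_patches` distinct patches from the gh x gw grid.
        for p in torch.randperm(gh * gw)[:num_patches].tolist():
            row, col = divmod(p, gw)
            ys, xs = row * patch_size, col * patch_size
            sl = out[i, :, ys:ys + patch_size, xs:xs + patch_size]
            sl += noise_std * torch.randn_like(sl)
    return out.clamp(0.0, 1.0)  # assumes pixel values live in [0, 1]

def adversarial_patch(model, images, labels, patch_size=16, steps=40, step_size=0.05):
    """Adversarial-patch setting: sign-gradient ascent restricted to one patch."""
    mask = torch.zeros_like(images)
    mask[:, :, :patch_size, :patch_size] = 1.0  # fixed top-left patch, for brevity
    adv = images.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        (grad,) = torch.autograd.grad(loss, adv)
        # Update only the masked patch; all other pixels stay clean.
        adv = (adv + step_size * grad.sign() * mask).clamp(0.0, 1.0)
    return adv.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(            # untrained toy classifier, demo only
        torch.nn.Conv2d(3, 8, 3, stride=4), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(8, 10),
    )
    imgs, labels = torch.rand(2, 3, 224, 224), torch.tensor([0, 1])
    print("clean:     ", model(imgs).argmax(1).tolist())
    print("corrupted: ", model(corrupt_patches(imgs)).argmax(1).tolist())
    print("adv patch: ", model(adversarial_patch(model, imgs, labels)).argmax(1).tolist())
```

In an actual evaluation, one would run both perturbations against trained ViTs and CNNs, sweeping the number of corrupted patches or the attacked patch location, and compare accuracy drops; the abstract's finding is that the ranking of the two architectures flips between the two settings.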
Submission Type: Full submission (technical report + code/data)
Co Submission: No, I am not submitting to the dataset and benchmark track and will complete my submission by June 3.