Adversarial Token Attacks on Vision Transformers

CoRR 2021 (modified: 18 Apr 2023)
Abstract: Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks. We investigate fundamental differences between these two families of models by designing a block-sparsity-based adversarial token attack. We probe and analyze transformer and convolutional models with token attacks of varying patch sizes. We find that transformer models are more sensitive to token attacks than convolutional models, with ResNets outperforming transformer models by up to $\sim30\%$ in robust accuracy under single-token attacks.
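The abstract describes a block-sparsity-based token attack: an adversarial perturbation confined to the pixels of a single patch token. The paper's actual implementation is not shown here; the following is a minimal hypothetical sketch of the idea, assuming a NumPy image, a caller-supplied `grad_fn` that returns the loss gradient with respect to the input, and a simple projected sign-gradient ascent loop (all names are illustrative, not the authors' code).

```python
import numpy as np

def token_attack(x, grad_fn, patch=16, token=(0, 0), eps=0.1, steps=10, alpha=0.02):
    """Single-token attack sketch: sign-gradient ascent restricted to one patch.

    x        : (H, W) image array
    grad_fn  : callable returning dLoss/dx for the current adversarial image
    token    : (row, col) index of the attacked patch in the token grid
    The perturbation is block-sparse (nonzero only inside one patch) and
    projected into an L-infinity ball of radius eps.
    """
    r0, c0 = token[0] * patch, token[1] * patch
    mask = np.zeros_like(x)
    mask[r0:r0 + patch, c0:c0 + patch] = 1.0  # block-sparsity pattern

    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv) * mask               # zero gradient outside the token
        x_adv = x_adv + alpha * np.sign(g)      # ascent step on the loss
        # project back: bounded perturbation, confined to the attacked patch
        x_adv = x + np.clip(x_adv - x, -eps, eps) * mask
    return x_adv

# toy demo with an analytic "loss" (sum of pixels => gradient of all ones)
x = np.zeros((32, 32))
adv = token_attack(x, lambda z: np.ones_like(z))
```

In the toy demo the perturbation saturates at `eps` inside the attacked 16x16 patch and stays exactly zero everywhere else, which is the block-sparsity constraint the abstract refers to; attacks of "varying patch sizes" correspond to sweeping the `patch` parameter.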