Keywords: Backdoor Attacks, Vision Transformer
TL;DR: In this paper, we investigate how to adapt existing defenses to ViTs and propose a new attack, named CAT, for more reliable evaluations.
Abstract: Backdoor attacks, which make Convolutional Neural Networks (CNNs) exhibit specific behaviors in the presence of a predefined trigger, pose risks to the use of CNNs. These threats should also be considered for Vision Transformers (ViTs). However, previous studies found that existing backdoor attacks on ViTs are powerful enough to bypass common backdoor defenses, \textit{i.e.}, these defenses either fail to reduce the attack success rate or cause a significant accuracy drop. In this paper, we first investigate this phenomenon and find that this achievement is over-optimistic, caused by the inappropriate adaptation of defenses from CNNs to ViTs. With proper adaptation, existing backdoor attacks can still be easily defended against. Furthermore, we propose a more reliable attack: adding a small perturbation to the trigger is enough to make existing attacks more persistent against various defenses. We hope our contributions, including the finding that existing attacks remain easy to defend against with proper adaptations and the new backdoor attack, will promote more in-depth research into the backdoor robustness of ViTs.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19406