ADAPT to Robustify Prompt Tuning Vision Transformers

TMLR Paper 3578 Authors

28 Oct 2024 (modified: 05 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Deep models, including Vision Transformers, are known to be vulnerable to adversarial attacks. Many existing defenses against these attacks, such as adversarial training, rely on full-model fine-tuning to induce robustness in the models. These defenses require storing a copy of the entire model, which can have billions of parameters, for each task. At the same time, parameter-efficient prompt tuning is used to adapt large transformer-based models to downstream tasks without the need to save large copies. In this paper, we examine parameter-efficient prompt tuning of Vision Transformers for downstream tasks through the lens of robustness. We show that previous adversarial defense methods, when applied to the prompt tuning paradigm, suffer from gradient obfuscation and are vulnerable to adaptive attacks. We introduce ADAPT, a novel framework for performing adaptive adversarial training in the prompt tuning paradigm. Our method achieves a competitive robust accuracy of ∼40%, on par with SOTA robustness methods that use full-model fine-tuning, while tuning only ∼1% of the parameters.
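To make the setting concrete, below is a minimal sketch of adversarial training under prompt tuning: learnable prompt tokens are prepended to the patch sequence of a frozen ViT-style backbone, and only the prompts (and a small head) are updated on adversarial examples. This illustrates the general paradigm the abstract describes, not the paper's ADAPT method itself; all names (PromptedViT, pgd_attack), the toy architecture, and the hyperparameters are illustrative assumptions, and the inner maximization here is plain PGD rather than the adaptive attack the paper proposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedViT(nn.Module):
    """Toy ViT-style model for prompt tuning (positional embeddings omitted).

    The backbone is frozen; only the prompt tokens and the classification
    head are trainable, so only ~1% of the parameters are stored per task.
    """
    def __init__(self, patch=4, dim=192, depth=4, n_prompts=10, n_classes=10):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Freeze everything defined so far (the backbone).
        for p in self.parameters():
            p.requires_grad_(False)
        # Trainable pieces: prompt tokens prepended to the patch sequence.
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        prompts = self.prompts.expand(x.size(0), -1, -1)          # (B, P, D)
        z = self.encoder(torch.cat([prompts, tokens], dim=1))
        return self.head(z.mean(dim=1))  # mean-pool instead of a CLS token

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD; gradients flow through the full (frozen) model."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

# Smoke test with random data (stands in for a real DataLoader).
model = PromptedViT()
opt = torch.optim.Adam([model.prompts, *model.head.parameters()], lr=1e-3)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = pgd_attack(model, x, y)            # inner maximization
loss = F.cross_entropy(model(x_adv), y)    # outer minimization on prompts/head
opt.zero_grad(); loss.backward(); opt.step()
```

Note that because the attack optimizes the input while the defense optimizes only the prompts, the loss landscape seen by the attacker can be misleading; this is the gradient-obfuscation issue the paper identifies, which its adaptive training is designed to avoid.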
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: Pin-Yu Chen
Submission Number: 3578