Abstract: With the rapid advancement of multimodal learning, pretrained Vision-Language Models (VLMs) such as CLIP have demonstrated
remarkable capabilities in bridging the gap between visual and language
modalities. However, these models remain vulnerable to adversarial attacks, particularly in the image modality, presenting considerable security
risks. This paper introduces Adversarial Prompt Tuning (AdvPT),
a novel technique to enhance the adversarial robustness of image encoders in VLMs. AdvPT innovatively leverages learnable text prompts
and aligns them with adversarial image embeddings to address the vulnerabilities inherent in VLMs, without the need for extensive parameter
training or modification of the model architecture. We demonstrate that
AdvPT improves resistance against white-box and black-box adversarial
attacks and exhibits a synergistic effect when combined with existing
input denoising defense techniques, further boosting defensive capabilities.
Comprehensive experimental analyses provide insights into adversarial
prompt tuning, a novel paradigm that improves resistance to adversarial images through textual input modifications, paving the way for
future robust multimodal learning research. These findings open up new
possibilities for enhancing the security of VLMs. Our code is available at
https://github.com/jiamingzhang94/Adversarial-Prompt-Tuning.
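
To make the mechanism concrete, below is a minimal, self-contained PyTorch sketch of the idea: only a small set of learnable text context vectors is optimized so that the class text embeddings realign with adversarial image embeddings, while the encoders themselves stay frozen. The linear image_encoder and text_encoder, the class_tokens placeholder, and the PGD attack with its hyperparameters are illustrative stand-ins (assumptions for this sketch), not the paper's exact pipeline, which operates on a pretrained CLIP model.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
embed_dim, ctx_len, num_classes, img_dim = 128, 4, 10, 3 * 32 * 32

# Stand-ins for the frozen encoders (assumption: AdvPT itself uses the
# pretrained CLIP image/text encoders, kept frozen throughout).
image_encoder = torch.nn.Linear(img_dim, embed_dim).requires_grad_(False)
text_encoder = torch.nn.Linear((ctx_len + 1) * embed_dim, embed_dim).requires_grad_(False)

# Fixed stand-ins for the class-name token embeddings ("cat", "dog", ...).
class_tokens = torch.randn(num_classes, embed_dim)

# Learnable context vectors -- the only parameters that get trained.
ctx = torch.nn.Parameter(0.02 * torch.randn(ctx_len, embed_dim))

def text_features():
    # Prepend the shared learnable context to every class token, then encode.
    prompts = torch.cat([ctx.flatten().expand(num_classes, -1), class_tokens], dim=-1)
    return F.normalize(text_encoder(prompts), dim=-1)

def logits(images):
    # CLIP-style cosine-similarity logits between image and text embeddings.
    img = F.normalize(image_encoder(images), dim=-1)
    return 100.0 * img @ text_features().t()

def pgd(images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    # L_inf PGD against the frozen image encoder (illustrative attack choice).
    adv = images.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(logits(adv), labels), adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = adv.clamp(images - eps, images + eps).clamp(0, 1)
    return adv.detach()

# One tuning step: align adversarial image embeddings with the learnable prompts.
opt = torch.optim.SGD([ctx], lr=0.01)
images = torch.rand(8, img_dim)
labels = torch.randint(0, num_classes, (8,))
adv_images = pgd(images, labels)
loss = F.cross_entropy(logits(adv_images), labels)
opt.zero_grad()
loss.backward()
opt.step()
```

Because gradients reach ctx only through the frozen text encoder, the defense adjusts the textual input space rather than retraining either encoder, which is what keeps the approach lightweight relative to adversarial training of the full model.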