Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques

Published: 19 Jun 2024, Last Modified: 09 Jul 2024 · ICML 2024 TiFA Workshop · CC BY 4.0
Keywords: Adversarial Robustness, Vision-Language Models, Prompt Engineering
Abstract: Vision-Language Models (VLMs) have witnessed a surge in both research and real-world applications. However, as they become increasingly prevalent, ensuring their robustness against adversarial attacks is paramount. This work systematically investigates the impact of model design choices on the adversarial robustness of VLMs against image-based attacks. Additionally, we introduce novel, cost-effective approaches to enhance robustness through prompt formatting. By rephrasing questions and warning the model of potential adversarial perturbations, we demonstrate substantial improvements in robustness against strong image-based attacks such as Auto-PGD. Our findings provide important guidelines for developing more robust VLMs, particularly for deployment in safety-critical environments.
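The prompt-formatting defense described above can be sketched as a simple prompt wrapper. This is a minimal illustration only; the function name, the warning text, and the rephrasing template are assumptions for demonstration, not the paper's exact prompts:

```python
def format_robust_prompt(question: str) -> str:
    """Wrap a VQA question with a rephrased form plus a warning that the
    image may be adversarially perturbed (wording is illustrative)."""
    # Hypothetical warning prefix alerting the model to possible perturbations
    warning = ("Note: the accompanying image may contain adversarial "
               "perturbations. Answer based on its true content.")
    # Hypothetical rephrasing template for the original question
    rephrased = f"Please answer the following question about the image: {question}"
    return f"{warning}\n{rephrased}"


if __name__ == "__main__":
    print(format_robust_prompt("What animal is shown?"))
```

The wrapped prompt would then be passed to the VLM in place of the raw question, requiring no changes to model weights or training.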
Submission Number: 27