Coordinated Robustness Evaluation Framework for Vision Language Models

Published: 10 Oct 2024, Last Modified: 04 Dec 2024
NeurIPS 2024 Workshop RBFM Poster
License: CC BY 4.0
Keywords: Trustworthy AI, Evaluation, Vision Language Models, Foundation Model
TL;DR: A coordinated attack on the vision and language modalities for robustness evaluation of foundation models.
Abstract: Vision-language models, which integrate computer vision and natural language processing capabilities, have demonstrated significant advancements in tasks such as image captioning and visual question answering. However, like traditional models, they are susceptible to small perturbations, posing a challenge to their robustness, particularly in deployment scenarios. Evaluating the robustness of these models requires perturbations in both the vision and language modalities to capture their inter-modal dependencies. In this work, we train a generic surrogate model that takes both image and text as input and produces a joint representation, which is then used to generate adversarial perturbations for both the text and image modalities. This coordinated attack strategy is evaluated on visual question answering and visual reasoning datasets using various state-of-the-art vision-language models. Our results show that the proposed strategy outperforms both multi-modal and single-modality attacks from the recent literature, compromising the robustness of several state-of-the-art pre-trained multi-modal models such as InstructBLIP and ViLT.
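The abstract's pipeline (surrogate encoder producing a joint representation, then coordinated perturbations against both modalities) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `Surrogate` class is a toy linear stand-in for the paper's surrogate model, the image attack is a single PGD-style step with random start, and the text attack is a greedy single-token swap; none of these names or choices come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

class Surrogate:
    """Toy linear stand-in for the paper's surrogate encoder (illustrative only)."""
    def __init__(self, img_dim=16, vocab=32, emb_dim=8):
        self.W_img = rng.normal(size=(img_dim, emb_dim))  # image branch weights
        self.W_txt = rng.normal(size=(vocab, emb_dim))    # token embedding table
        self.vocab = vocab
        self.emb_dim = emb_dim

    def joint(self, image, tokens):
        img_emb = image @ self.W_img                      # encode image
        txt_emb = self.W_txt[list(tokens)].mean(axis=0)   # mean token embedding
        return np.concatenate([img_emb, txt_emb])         # joint representation

def coordinated_attack(model, image, tokens, eps=0.05):
    """Perturb both modalities to push the joint embedding away from the clean one."""
    clean = model.joint(image, tokens)

    # Image modality: one PGD-style step with random start, maximizing
    # ||z - clean||^2. The surrogate is linear, so the gradient is analytic.
    delta = rng.uniform(-eps, eps, size=image.shape)
    z_img = (image + delta) @ model.W_img
    grad = 2.0 * (z_img - clean[: model.emb_dim]) @ model.W_img.T
    delta = np.clip(delta + eps * np.sign(grad), -eps, eps)  # keep within L_inf ball
    adv_image = image + delta

    # Text modality: greedy swap of the first token, coordinated with the
    # already-perturbed image (distance measured on the joint representation).
    best_tokens, best_dist = list(tokens), 0.0
    for t in range(model.vocab):
        cand = [t] + list(tokens[1:])
        d = np.linalg.norm(model.joint(adv_image, cand) - clean)
        if d > best_dist:
            best_tokens, best_dist = cand, d
    return adv_image, best_tokens

model = Surrogate()
image = rng.uniform(0, 1, size=16)
tokens = [3, 7, 11]
adv_img, adv_tok = coordinated_attack(model, image, tokens, eps=0.05)
dist = np.linalg.norm(model.joint(adv_img, adv_tok) - model.joint(image, tokens))
```

The key design point the sketch mirrors is that both perturbations are scored against the same joint representation, so the attack exploits inter-modal dependencies rather than attacking each modality in isolation.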
Submission Number: 4