Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift

Published: 19 Jan 2024, Last Modified: 19 Jan 2024. Accepted by DMLR.
Abstract: Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating their robustness against distribution shifts is crucial before adopting them in real-world applications. In this work, we investigate the robustness of 12 popular open-source image-text models under common perturbations on five tasks (image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation). In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, and are especially sensitive to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics, MMI (MultiModal Impact score) and MOR (Missing Object Rate), for properly evaluating multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models. More details can be found on the project webpage: https://MMRobustness.github.io.
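Neither the perturbation implementations nor the exact metric definitions appear on this page; they are part of the released benchmark suite. As an illustration only, below is a minimal Python sketch of a character-level text perturbation (reported above as the most severe text shift) together with a relative-performance-drop score in the spirit of MMI. The function names and the precise formula are assumptions for illustration, not the authors' code; see https://MMRobustness.github.io for the actual implementation.

```python
import random


def char_swap(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Character-level perturbation: randomly swap adjacent letters.

    Hypothetical example of one perturbation type; the paper's full
    suite covers 16 text perturbation techniques.
    """
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def impact_score(clean: float, perturbed: list[float]) -> float:
    """Average relative performance drop across perturbations.

    Assumed reading of a "MultiModal Impact"-style score (drop relative
    to clean performance); the paper's exact MMI definition may differ.
    """
    return sum((clean - p) / clean for p in perturbed) / len(perturbed)


if __name__ == "__main__":
    print(char_swap("a man riding a horse on the beach", rate=0.2))
    # Clean retrieval accuracy 0.80 vs. three perturbed runs.
    print(f"impact: {impact_score(0.80, [0.72, 0.65, 0.70]):.3f}")
```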
Keywords: Multimodal, Robustness, Distribution Shift
Changes Since Last Submission: Submitted the camera-ready version.
Code: https://MMRobustness.github.io
Assigned Action Editor: ~Hongyang_Zhang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 3