Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Representation Learning

Authors: ICLR 2024 Workshop ME-FoMo Submission 49 Authors (anonymous)

Published: 04 Mar 2024, Last Modified: 02 May 2024
Venue: ME-FoMo 2024 (Oral)
License: CC BY 4.0
Keywords: CLIP, modality gap, object bias, information imbalance, data
TL;DR: We study the modality gap and the object bias, and find that an information imbalance between images and their captions is the trigger for both.
Abstract: Contrastive vision-language models like CLIP have gained popularity for their rich representations, which are applicable to various downstream tasks. Despite their success on some tasks, such as zero-shot image recognition, they perform surprisingly poorly on others, such as attribute detection. Previous work has attributed these challenges to the modality gap, a separation of image and text embeddings in the shared representation space, and to a bias favoring objects over other factors, such as attributes. We investigate both phenomena. Specifically, we find an unintuitive correlation between the modality gap and downstream performance, with only a few embedding dimensions driving the gap. But what leads to the emergence of these phenomena? To answer this question, we design a controlled setting that allows us to vary the amount of shared information between the modalities. This reveals that the driving factor behind both the modality gap and the object bias is the information imbalance between images and captions.
Submission Number: 49
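The modality gap the abstract refers to is commonly measured, following Liang et al. (2022), as the distance between the centroids of the L2-normalized image and text embeddings. The sketch below is a minimal illustration of that measurement, assuming the Hugging Face transformers library and the openai/clip-vit-base-patch32 checkpoint (neither is specified by this paper; the dummy solid-color images and captions are placeholders, not the paper's data). It also ranks the coordinates of the gap vector, echoing the paper's observation that only a few embedding dimensions drive the gap.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical checkpoint choice; the paper does not prescribe a specific model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image-caption pairs; real analyses would use a paired dataset.
images = [Image.new("RGB", (224, 224), c) for c in ["red", "green", "blue"]]
texts = ["a red square", "a green square", "a blue square"]

inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )

# L2-normalize embeddings, as is standard for CLIP similarity scoring.
img = img / img.norm(dim=-1, keepdim=True)
txt = txt / txt.norm(dim=-1, keepdim=True)

# Modality gap as the Euclidean distance between the modality centroids.
gap = img.mean(dim=0) - txt.mean(dim=0)
print(f"modality gap (centroid distance): {gap.norm():.3f}")

# Which embedding dimensions contribute most to the gap vector?
top = gap.abs().topk(5)
print("top gap dimensions:", top.indices.tolist())
```

With only three toy pairs the numbers are not meaningful; the point is the recipe: embed both modalities, normalize, take centroids, and inspect the gap vector coordinate-wise rather than only its norm.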