Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models

Published: 01 Jan 2024 · Last Modified: 01 Oct 2024 · EACL (1) 2024 · License: CC BY-SA 4.0
Abstract: Despite the impressive advancements achieved through vision-and-language pretraining, it remains unclear whether multimodal learning helps understanding of each individual modality. In this work, we conduct a comparative analysis of the visual representations in existing vision-and-language models and vision-only models by probing them on a broad range of tasks. We evaluate five probing tasks to assess the quality of the learned representations in a nuanced manner. Our results suggest that vision-and-language models are better at label-prediction tasks such as object and attribute prediction, while vision-only models are stronger at dense-prediction tasks that require more localized information. We hope our study sheds light on the role of language in visual learning and serves as an empirical guide for using various pretrained models.
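The comparison rests on probing: a small predictor is trained on top of frozen visual features from each pretrained encoder, so that performance reflects what the representation already encodes rather than what fine-tuning can add. The sketch below illustrates the general idea with a linear probe in PyTorch; the backbone, feature dimension, and class count are placeholders for illustration, not the paper's actual models or probing setup.

```python
import torch
import torch.nn as nn


class LinearProbe(nn.Module):
    """Minimal linear-probing sketch: a trainable classifier over frozen features."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # Freeze the pretrained encoder so only the probe is learned.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            # Assumes the backbone returns a pooled feature vector per image.
            feats = self.backbone(images)
        return self.head(feats)


# Hypothetical usage: compare a vision-and-language encoder against a
# vision-only encoder by training identical probes on the same labels.
# probe = LinearProbe(pretrained_encoder, feat_dim=768, num_classes=80)
# logits = probe(batch_of_images)
```

For dense-prediction probes (e.g., segmentation or depth), the same recipe applies with a lightweight per-patch or convolutional head in place of the single linear layer.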
