Beyond Image-Text Matching: Verb Understanding in Multimodal Transformers Using Guided Masking

Published: 01 Jan 2025, Last Modified: 19 May 2025 · SOFSEM (1) 2025 · CC BY-SA 4.0
Abstract: Probing methods are widely used to evaluate the multimodal representations of vision-language models (VLMs), with the dominant approaches relying on zero-shot performance in image-text matching tasks. These methods typically assess models on curated datasets focusing on linguistic aspects such as counting, relations, or attributes. This work uses a complementary probing strategy called guided masking. This approach selectively masks different modalities and evaluates the model’s ability to predict the masked word. We specifically focus on probing verbs, as their comprehension is crucial for understanding actions and relationships in images, and it presents a more challenging task than the comprehension of subjects, objects, or attributes. Our analysis targets VLMs that use region-of-interest (ROI) features obtained from object detectors as input tokens. Our experiments demonstrate that the selected models can accurately predict the correct verb, challenging previous conclusions based on image-text matching methods, which suggested that VLMs fail in situations requiring verb understanding. The code for the experiments will be available at https://github.com/ivana-13/guided_masking.
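To make the probing strategy concrete, below is a minimal sketch of guided masking with an ROI-feature VLM. It is illustrative only and not the paper's implementation (which lives in the linked repository): it assumes the HuggingFace VisualBERT checkpoint "uclanlp/visualbert-vqa-coco-pre", an example caption, and random tensors standing in for the object-detector ROI features. The caption's verb is replaced with [MASK] and the masked-language-modeling head is asked to recover it.

```python
# Guided masking sketch: mask the verb in a caption and let a ROI-feature VLM
# predict it. Assumptions: VisualBertForPreTraining checkpoint and random ROI
# features in place of real Faster R-CNN detector outputs.
import torch
from transformers import BertTokenizer, VisualBertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForPreTraining.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
model.eval()

caption = "a man is riding a horse on the beach"
verb = "riding"  # hypothetical example; the paper probes verbs from curated captions

# Tokenize the caption and replace the verb token with [MASK] (guided masking).
inputs = tokenizer(caption, return_tensors="pt")
verb_id = tokenizer.convert_tokens_to_ids(verb)
mask_positions = (inputs["input_ids"][0] == verb_id).nonzero(as_tuple=True)[0]
inputs["input_ids"][0, mask_positions] = tokenizer.mask_token_id

# Dummy ROI features; in the real setup these come from an object detector
# (e.g. 36 regions of 2048-d pooled features).
visual_embeds = torch.randn(1, 36, 2048)
visual_attention_mask = torch.ones(1, 36, dtype=torch.long)
visual_token_type_ids = torch.ones(1, 36, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        **inputs,
        visual_embeds=visual_embeds,
        visual_attention_mask=visual_attention_mask,
        visual_token_type_ids=visual_token_type_ids,
    )

# Rank the vocabulary at the masked position and check whether the
# ground-truth verb is among the top predictions.
logits = outputs.prediction_logits[0, mask_positions[0]]
top_ids = logits.topk(5).indices.tolist()
print("top-5 predictions:", tokenizer.convert_ids_to_tokens(top_ids))
print("correct verb recovered:", verb_id in top_ids)
```

With real detector features instead of the random placeholders, the rank of the ground-truth verb at the masked position is the kind of signal this probing strategy measures.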