Keywords: multi-modal learning, vision-language models, compositional generalization, linguistic prior
TL;DR: We show that current improvements to vision-language compositional generalization mainly rely on linguistic priors.
Abstract: Compositionality is a common property of many modalities, including natural language and images, but the compositional generalization of multi-modal models is not well understood. In this paper, we identify two sources of visual-linguistic compositionality: linguistic priors and the interplay between images and text. We show that current attempts to improve compositional generalization rely on linguistic priors rather than on information in the image. We also propose a new metric for compositionality that controls for such linguistic priors.
Submission Number: 13