Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Oral
Keywords: Grammar Induction, Vision-Language Matching, Unsupervised Learning
Abstract: We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an image-caption pair, the goal is to extract a shared hierarchical structure for both image and language simultaneously. We argue that such structured output, grounded in both modalities, is a clear step towards high-level understanding of multimodal information. Beyond the challenges present in conventional visually grounded grammar induction, VL grammar induction requires a model to capture contextual semantics and perform fine-grained alignment. To address these challenges, we propose a novel method, CLIORA, which constructs a shared vision-language constituency tree with context-dependent semantics for all candidate phrases at different levels of the tree. It computes a matching score between each constituent and image region, trained via contrastive learning, and integrates two levels of fusion, namely feature-level and score-level fusion, to enable fine-grained alignment. We introduce a new evaluation metric for VL grammar induction, CCRA, and show a 3.3% improvement over a strong baseline on Flickr30k Entities. We also evaluate our model on two derived tasks, i.e., language grammar induction and phrase grounding, and improve over the state-of-the-art on both.
One-sentence Summary: We introduce a new unsupervised vision-language grammar induction task that explores multimodal information and induces a shared hierarchical structure for both image and language simultaneously.
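
The abstract mentions a contrastively trained matching score between constituents and image regions, aggregated at the score level. The sketch below is only a rough illustration of that idea, not the paper's implementation: all function names, the cosine-similarity scoring, the max-over-regions aggregation, the hinge-style contrastive objective, and the toy dimensions are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def span_region_scores(span_emb, region_emb):
    """Cosine similarity between every constituent span and every image region.

    span_emb:   (num_spans, d)   embeddings of candidate constituents
    region_emb: (num_regions, d) embeddings of detected image regions
    returns:    (num_spans, num_regions) score matrix
    """
    span_emb = F.normalize(span_emb, dim=-1)
    region_emb = F.normalize(region_emb, dim=-1)
    return span_emb @ region_emb.t()

def caption_image_score(span_emb, region_emb):
    """Score-level aggregation (an assumed variant): each span takes its
    best-matching region, then span scores are averaged into one scalar."""
    scores = span_region_scores(span_emb, region_emb)  # (spans, regions)
    return scores.max(dim=-1).values.mean()

def contrastive_loss(span_embs, region_embs, margin=0.2):
    """Hinge-style contrastive loss over a batch of image-caption pairs:
    a matched pair should outscore mismatched pairs by at least `margin`.

    span_embs / region_embs: lists of length B (one entry per example),
    since captions and images have varying numbers of spans / regions.
    """
    B = len(span_embs)
    pos = torch.stack([caption_image_score(span_embs[i], region_embs[i])
                       for i in range(B)])
    loss = 0.0
    for i in range(B):
        for j in range(B):
            if i == j:
                continue
            neg = caption_image_score(span_embs[i], region_embs[j])
            loss = loss + F.relu(margin + neg - pos[i])
    return loss / (B * (B - 1))

# Toy usage with random features; sizes are arbitrary placeholders.
if __name__ == "__main__":
    torch.manual_seed(0)
    span_embs = [torch.randn(n, 64) for n in (5, 7)]      # two captions
    region_embs = [torch.randn(m, 64) for m in (36, 36)]  # two images
    print(contrastive_loss(span_embs, region_embs).item())
```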
Supplementary Material: zip