Harlequin: Color-driven Generation of Synthetic Data for Referring Expression Comprehension

Published: 09 Apr 2024 · Last Modified: 22 Apr 2024 · SynData4CV · CC BY 4.0
Keywords: Synthetic Data Generation, Referring Expression Comprehension, Visual Grounding
TL;DR: Color-driven Generation of Synthetic Data for Referring Expression Comprehension
Abstract: Referring Expression Comprehension (REC) aims to identify a particular object in a scene from a natural language expression, and is an important topic in visual language understanding. State-of-the-art methods for this task are based on deep learning, which generally requires expensive, manually labeled annotations. Some works tackle the problem with limited-supervision learning or by relying on Large Vision and Language models; however, the development of techniques to synthesize labeled data has been overlooked. In this paper, we propose a novel pipeline that generates artificial data for the REC task, taking into account both textual and visual modalities. The pipeline first processes existing data to create variations of the annotations, then generates an image using the altered annotations as guidance. The result is a new dataset, termed Harlequin, comprising more than 1M queries. This approach eliminates manual data collection and annotation, enabling scalability and allowing arbitrary complexity. We pre-train two REC models on Harlequin, then fine-tune and evaluate them on human-annotated datasets. Our experiments show that pre-training on artificial data is beneficial for performance.
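To make the two-step pipeline in the abstract concrete, the sketch below shows one plausible shape of a color-driven variation loop: perturb the color attribute in an existing referring expression, then hand the altered annotation to an image generator. This is a minimal sketch under assumptions, not the paper's implementation; `vary_annotation` and `generate_image` are hypothetical names, and the abstract does not specify which generator is used.

```python
# Minimal sketch of a Harlequin-style generation loop (assumptions: the
# variation is a color swap in the expression, and a layout/text-conditioned
# image generator exists; neither is confirmed by the abstract alone).
import random
from dataclasses import dataclass

COLORS = ["red", "blue", "green", "yellow", "black", "white"]

@dataclass
class Annotation:
    expression: str            # referring expression, e.g. "the red car on the left"
    box: tuple[int, int, int, int]  # (x, y, w, h) bounding box of the referent

def vary_annotation(ann: Annotation) -> Annotation:
    """Create a variation of an existing annotation by swapping any
    color word in the expression (the 'color-driven' step)."""
    words = [random.choice(COLORS) if w in COLORS else w
             for w in ann.expression.split()]
    return Annotation(expression=" ".join(words), box=ann.box)

def generate_image(ann: Annotation):
    """Hypothetical stand-in for an image generator (e.g. a diffusion
    model) conditioned on the altered expression and bounding box."""
    raise NotImplementedError("plug in a layout/text-conditioned generator")

def synthesize(dataset):
    """Turn existing REC annotations into new (image, annotation) pairs,
    with no manual collection or labeling."""
    for ann in dataset:
        new_ann = vary_annotation(ann)
        yield generate_image(new_ann), new_ann
```

Because both the perturbation and the generation are automatic, the same loop can be run over any existing REC dataset to scale up the number of queries, which is how a corpus of more than 1M queries becomes feasible.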
Submission Number: 15