Data Curation for Image Captioning with Text-to-Image Generative Models

10 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Vision-language learning, Image captioning, Data curation, Text-to-image generation, Stable Diffusion
TL;DR: Better image captioning models can be trained by curating existing datasets and incorporating images synthesized by Text-to-Image models.
Abstract: Recent advances in image captioning are driven by increasingly large-scale vision-language pretraining, which relies on massive computational resources and ever-larger datasets. Instead of focusing solely on scaling pretraining, we ask whether performance can be improved by improving the quality of the samples in existing datasets. We pursue this question through two approaches to data curation: one that assumes some examples should be avoided due to mismatches between the image and caption, and one that assumes the mismatch can be addressed by replacing the image, for which we use the state-of-the-art Stable Diffusion model. These approaches are evaluated using the BLIP model on the COCO and Flickr30K datasets. Models trained with our data curation approaches consistently outperform their baselines, indicating that better image captioning models can be trained by curating existing resources. Finally, we conduct a human study to understand the errors made by the Stable Diffusion model and highlight directions for future work in text-to-image generation.
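The abstract only names the two curation strategies (dropping mismatched image-caption pairs, or replacing the image with a synthesized one). As a rough illustration, the sketch below assumes a CLIP-style image-text similarity score as the mismatch signal and the Hugging Face diffusers Stable Diffusion pipeline for image replacement; the function names, the threshold value, and the choice of CLIP as the scoring model are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the two curation strategies described in the abstract:
# (1) "filter": drop image-caption pairs whose match score is low, or
# (2) "replace": keep the caption but substitute a Stable Diffusion image.
# The mismatch signal here is CLIP image-text similarity (an assumption;
# the abstract does not specify the exact criterion used in the paper).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

def match_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = clip_proc(text=[caption], images=image,
                       return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        out = clip(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def curate(dataset, threshold=0.25, mode="replace"):
    """Yield curated (image, caption) pairs from an iterable of (PIL image, str).

    mode="filter":  drop pairs whose match score is below `threshold`.
    mode="replace": keep the caption but substitute a synthesized image.
    """
    for image, caption in dataset:
        if match_score(image, caption) >= threshold:
            yield image, caption            # well-matched pair: keep as-is
        elif mode == "replace":
            synth = sd(caption).images[0]   # synthesize an image from the caption
            yield synth, caption
        # mode == "filter": mismatched pair is simply skipped
```

The curated pairs would then be used as training data for the captioning model (BLIP in the paper) in place of the original dataset.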
Supplementary Material: zip
Submission Number: 6633