SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?

Published: 09 Apr 2024 · Last Modified: 12 Apr 2024 · SynData4CV · CC BY 4.0
Keywords: synthetic data, CLIP training
TL;DR: SynthCLIP trains CLIP models on synthetic text-image pairs generated by text-to-image networks and language models. It achieves performance comparable to models trained on real data and introduces SynthCI-30M, a synthetic dataset of 30M captioned images.
Abstract: We present SynthCLIP, a novel framework for training CLIP models on entirely synthetic text-image pairs, a significant departure from previous methods that rely on real data. Leveraging recent text-to-image (TTI) generative networks and large language models (LLMs), we generate synthetic datasets of images and corresponding captions at any scale, with no human intervention. Trained at scale, SynthCLIP achieves performance comparable to CLIP models trained on real datasets. We also introduce SynthCI-30M, a purely synthetic dataset comprising 30 million captioned images. Our code, trained models, and generated data will be released as open source.
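The abstract describes a two-stage data generation pipeline (LLM-generated captions rendered into images by a TTI model) followed by standard CLIP training. The sketch below illustrates that paradigm only; the model choices (`gpt2`, `stabilityai/stable-diffusion-2-1`), the prompt, and the helper names are assumptions for demonstration and are not the components used in the paper.

```python
# Illustrative sketch of a synthetic caption -> image -> (image, caption) pair pipeline.
# Model names and prompts are placeholders, not the paper's actual choices.
import torch
from transformers import pipeline as hf_pipeline
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: generate synthetic captions with a language model (model choice is illustrative).
caption_generator = hf_pipeline(
    "text-generation", model="gpt2", device=0 if device == "cuda" else -1
)
prompt = "A short image caption describing a cat:"
captions = [
    out["generated_text"].removeprefix(prompt).strip()
    for out in caption_generator(
        prompt, max_new_tokens=20, num_return_sequences=4, do_sample=True
    )
]

# Stage 2: render each caption into an image with a text-to-image model.
tti = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
images = [tti(caption).images[0] for caption in captions]

# Stage 3: the resulting (image, caption) pairs would then feed a standard
# CLIP contrastive training loop (e.g. via open_clip), omitted here for brevity.
pairs = list(zip(images, captions))
print(f"Generated {len(pairs)} synthetic image-caption pairs.")
```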
Submission Number: 1