Keywords: CLIP, synthetic data, multimodal learning, long-tail
Abstract: Pretraining strong vision or multimodal foundation models like CLIP relies on large-scale datasets (e.g., image-text pairs) that may be noisy, misaligned, and long-tailed in distribution. Previous work has shown promising results in augmenting datasets by generating synthetic samples. However, existing methods only support domain-specific, ad hoc use cases (e.g., generating images or text alone) and offer limited data diversity due to a lack of fine-grained control over the synthesis process.
We design a controllable image-text synthesis pipeline called CtrlSynth to enable data-efficient multimodal learning and improve vision and multimodal models in various use cases. The key idea is to decompose the visual semantics of an image into basic elements, apply user-specified control policies (e.g., remove, add, or replace operations), and recompose the elements to synthesize images or text. This decompose-and-recompose design allows users to control data synthesis in a fine-grained manner by defining custom control policies that manipulate the basic elements. CtrlSynth leverages pretrained foundation models such as large language models (LLMs) and diffusion models (DMs) to reason about and recompose the basic elements, so that synthetic samples are natural and composed in diverse ways. The CtrlSynth pipeline is training-free and modular in design, making it easy to support different pretrained models.
The CtrlSynth pipeline is also closed-loop: it can synthesize text conditioned on an image, or vice versa. Our evaluation shows that CtrlSynth samples substantially improve the zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models. We will publicly release the code and pipeline for future research.
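To make the decompose-control-recompose idea concrete, below is a minimal, hypothetical Python sketch. All names here (VisualElements, ControlPolicy, decompose, recompose_text) are illustrative assumptions for exposition, not the authors' actual API; in the real pipeline, decomposition and recomposition would be handled by pretrained taggers, LLMs, and diffusion models.

```python
# Hypothetical sketch of the decompose -> apply policy -> recompose loop.
# Stub functions stand in for pretrained models (taggers, LLMs, DMs).
from dataclasses import dataclass, field


@dataclass
class VisualElements:
    """Basic elements decomposed from an image: objects, attributes, relations."""
    objects: list
    attributes: list
    relations: list


@dataclass
class ControlPolicy:
    """User-specified fine-grained edits over the basic elements."""
    remove: set = field(default_factory=set)
    add: list = field(default_factory=list)
    replace: dict = field(default_factory=dict)

    def apply(self, elems: VisualElements) -> VisualElements:
        objects = [self.replace.get(o, o) for o in elems.objects
                   if o not in self.remove]
        objects.extend(self.add)
        return VisualElements(objects, elems.attributes, elems.relations)


def decompose(caption: str) -> VisualElements:
    # Placeholder: a vision tagger/captioner would extract elements from the image.
    return VisualElements(objects=caption.split(), attributes=[], relations=[])


def recompose_text(elems: VisualElements) -> str:
    # Placeholder: an LLM would compose a natural, diverse caption here.
    return "A photo of " + ", ".join(elems.objects)


if __name__ == "__main__":
    elems = decompose("dog frisbee park")
    policy = ControlPolicy(replace={"dog": "cat"}, add=["sunset"])
    print(recompose_text(policy.apply(elems)))
    # -> "A photo of cat, frisbee, park, sunset"
```

The closed-loop property described above would correspond to feeding a recomposed caption into a diffusion model to synthesize a new image, then decomposing that image again.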
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11271