Uncertainty Inclusive Contrastive Learning for Leveraging Synthetic Images

Published: 09 Apr 2024 · Last Modified: 24 Apr 2024 · SynData4CV · CC BY 4.0
Keywords: synthetic data, contrastive learning, uncertainty, generative models
Abstract: Recent advancements in text-to-image generation models have sparked growing interest in using synthesized training data to improve few-shot learning performance. However, prevailing approaches treat all generated data as uniformly important, neglecting the fact that the quality of generated images varies across domains and datasets. This uniform treatment can degrade learning performance. In this work, we present Uncertainty-Inclusive Contrastive Learning (UniCon), a novel contrastive loss function that incorporates uncertainty weights for synthetic images during learning. Extending the framework of supervised contrastive learning, we add a learned per-class weight for synthetic input images, adjusting their influence during training. We evaluate the effectiveness of the UniCon-learned representations against traditional supervised contrastive learning, both with and without synthetic images. Across three fine-grained classification datasets, we find that the representation space learned with the UniCon loss on synthetic data yields significantly better downstream classification performance than supervised contrastive learning baselines.
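The abstract describes extending supervised contrastive learning with per-class weights on synthetic samples. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' implementation: the function name `unicon_loss`, the argument names, and the exact weighting scheme (real images fixed at weight 1, synthetic positives down-weighted by a learned per-class weight) are all assumptions for illustration.

```python
import numpy as np

def unicon_loss(features, labels, is_synthetic, class_weights, temperature=0.1):
    """Hypothetical sketch of an uncertainty-weighted supervised contrastive loss.

    features: (N, D) L2-normalized embeddings
    labels: (N,) integer class labels
    is_synthetic: (N,) bool mask, True for synthetic images
    class_weights: (C,) per-class weights in [0, 1] applied to synthetic
        samples (real samples get weight 1); assumed learned elsewhere.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-contrast (exp(-inf) = 0)
    # log-softmax over each row: log p(j | i)
    sim_max = sim.max(axis=1, keepdims=True)  # for numerical stability
    log_prob = sim - sim_max - np.log(np.exp(sim - sim_max).sum(axis=1, keepdims=True))

    # Per-sample weight: synthetic images use their class's learned weight.
    w = np.where(is_synthetic, class_weights[labels], 1.0)

    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)
        if pos.any():
            # Positives contribute in proportion to their uncertainty weight,
            # so low-quality synthetic positives pull the anchor less.
            loss += -(w[pos] * log_prob[i, pos]).sum() / w[pos].sum()
    return loss / n
```

With all class weights set to 1 this reduces to a standard supervised contrastive loss; lowering a class's weight shrinks the contribution of that class's synthetic positives.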
Supplementary Material: pdf
Submission Number: 54