Is Synthetic Data all We Need? Benchmarking the Robustness of Models Trained with Synthetic Images

CVPR 2024 Workshop SyntaGen Submission 18 Authors

Published: 07 Apr 2024, Last Modified: 14 Apr 2024. Venue: SyntaGen 2024. License: CC BY 4.0
Keywords: synthetic models, robustness, synthetic data, benchmarking
TL;DR: We conduct a comprehensive benchmarking study comparing the robustness of models trained on data generated by diffusion models with that of models trained on real data.
Abstract: A long-standing challenge in developing machine learning approaches has been the lack of high-quality labeled data. Recently, models trained purely on synthetic data generated by large-scale pre-trained diffusion models, here termed synthetic clones, have shown promising results in overcoming this annotation bottleneck. As these synthetic clone models progress, they are likely to be deployed in challenging real-world settings, yet their suitability for such settings remains understudied. Our work addresses this gap by providing the first benchmark of three classes of synthetic clone models, namely supervised, self-supervised, and multi-modal ones, across a range of robustness measures. We show that existing self-supervised and multi-modal synthetic clones are comparable to or outperform state-of-the-art real-image baselines on a range of robustness metrics, including shape bias, background bias, and calibration. However, we also find that synthetic clones are much more susceptible to adversarial and real-world noise than models trained on real data. To address this, we show that combining real and synthetic data further increases robustness, and that the choice of prompt used to generate the synthetic images plays an important role in the robustness of synthetic clones.
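Among the robustness measures mentioned in the abstract, calibration quantifies how well a model's predicted confidences match its actual accuracy. As an illustration only (not the authors' evaluation code), below is a minimal sketch of the expected calibration error (ECE), a standard calibration metric; the names `confidences`, `predictions`, and `labels` are hypothetical placeholders for a model's per-sample outputs and ground truth.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE: weighted average of the gap between mean confidence and
    accuracy within equal-width confidence bins."""
    confidences = np.asarray(confidences)  # max softmax probability per sample
    predictions = np.asarray(predictions)  # predicted class per sample
    labels = np.asarray(labels)            # ground-truth class per sample

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # samples in this bin
        if not in_bin.any():
            continue
        bin_weight = in_bin.mean()  # fraction of all samples in the bin
        bin_accuracy = (predictions[in_bin] == labels[in_bin]).mean()
        bin_confidence = confidences[in_bin].mean()
        ece += bin_weight * abs(bin_accuracy - bin_confidence)
    return ece

# Hypothetical usage with softmax outputs `probs` of shape (n_samples, n_classes):
# ece = expected_calibration_error(probs.max(1), probs.argmax(1), labels)
```

A lower ECE means the model's confidence estimates are better aligned with how often it is actually correct, which is one way a benchmark like this can compare synthetic clones against real-image baselines.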
Submission Number: 18