Bounding Box-Guided Diffusion for Synthesizing Industrial Images and Segmentation Maps

Published: 06 May 2025, Last Modified: 09 May 2025 · SynData4CV · CC BY 4.0
Keywords: synthetic data, diffusion models, semantic segmentation, industrial data
TL;DR: We use diffusion models with enriched bounding boxes to generate realistic industrial defect datasets, improving segmentation accuracy while reducing labeling costs.
Abstract: Synthetic dataset generation in computer vision, particularly for industrial applications, is still underexplored. Industrial defect segmentation, for instance, requires highly accurate labels, yet acquiring such data is costly and time-consuming. To address this challenge, we propose a novel diffusion-based pipeline for generating high-fidelity industrial datasets with minimal supervision. Our approach conditions the diffusion model on enriched bounding box representations to produce precise segmentation masks, ensuring realistic and accurately localized defect synthesis. Compared to existing layout-conditioned generative methods, our approach improves defect consistency and spatial accuracy. We introduce two quantitative metrics to evaluate the effectiveness of our method and assess its impact on a downstream segmentation task trained on real and synthetic data. Our results demonstrate that diffusion-based synthesis can bridge the gap between artificial and real-world industrial data, fostering more reliable and cost-efficient segmentation models. The code is publicly available at https://github.com/covisionlab/diffusion_labeling.
Submission Number: 3
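
The abstract describes conditioning a diffusion model on enriched bounding box representations so that defects are synthesized at precisely localized positions. The snippet below is a minimal, hypothetical sketch of that general idea (not the authors' implementation): bounding boxes are rasterized into a layout map that is concatenated with the noisy image before denoising, trained with a DDPM-style noise-prediction loss. All names (`BoxConditionedDenoiser`, `boxes_to_map`) and the toy noise schedule are illustrative assumptions.

```python
# Hypothetical sketch of bounding-box-conditioned denoising (DDPM-style),
# illustrating the layout-conditioning idea from the abstract only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def boxes_to_map(boxes, size):
    """Rasterize (x1, y1, x2, y2, class_id) boxes into a one-channel layout map."""
    layout = torch.zeros(1, 1, size, size)
    for x1, y1, x2, y2, cls in boxes:
        layout[0, 0, int(y1):int(y2), int(x1):int(x2)] = float(cls)
    return layout


class BoxConditionedDenoiser(nn.Module):
    """Tiny stand-in for a UNet: noisy image + layout map -> predicted noise."""
    def __init__(self, img_channels=3, cond_channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + cond_channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )

    def forward(self, noisy_img, layout_map):
        # Concatenate the conditioning layout with the noisy image channels.
        return self.net(torch.cat([noisy_img, layout_map], dim=1))


# One toy training step on dummy data.
size = 64
image = torch.rand(1, 3, size, size)                         # clean image
layout = boxes_to_map([(10, 12, 40, 44, 1)], size)           # one defect box
t = torch.randint(0, 1000, (1,))
alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2  # toy noise schedule
noise = torch.randn_like(image)
noisy = (alpha_bar.sqrt().view(-1, 1, 1, 1) * image
         + (1 - alpha_bar).sqrt().view(-1, 1, 1, 1) * noise)

model = BoxConditionedDenoiser()
loss = F.mse_loss(model(noisy, layout), noise)
loss.backward()
print(f"toy denoising loss: {loss.item():.4f}")
```

In such a setup, the same rasterized layout could also serve as the target segmentation mask for the synthesized defect, which is one way the paper's goal of jointly generating images and labels can be read; the actual enrichment of the bounding box representation is detailed in the paper and repository, not here.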