ScribbleGen: Generative Data Augmentation Improves Scribble-supervised Semantic Segmentation

CVPR 2024 Workshop SyntaGen Submission 16 Authors

Published: 07 Apr 2024, Last Modified: 15 Apr 2024, SyntaGen 2024, CC BY 4.0
Keywords: synthetic data, weakly-supervised semantic segmentation, generative model
TL;DR: We explore generative data augmentations for scribble-supervised semantic segmentation and show encouraging results.
Abstract: Recent advances in generative models, such as diffusion models, have made generating high-quality synthetic images widely accessible. Prior works have shown that training on synthetic images improves many perception tasks, such as image classification, object detection, and semantic segmentation. We are the first to explore generative data augmentations for scribble-supervised semantic segmentation. We propose ScribbleGen, a generative data augmentation method that leverages a ControlNet diffusion model conditioned on semantic scribbles to produce high-quality training data. However, naive implementations of generative data augmentations may inadvertently harm the performance of the downstream segmentor rather than improve it. We leverage classifier-free diffusion guidance to enforce class consistency and introduce encode ratios to trade off data diversity for data realism. Using the guidance scale and encode ratio, we can generate a spectrum of high-quality training images. We propose multiple augmentation schemes and find that these schemes significantly impact model performance, especially in the low-data regime. Our framework further reduces the gap between the performance of scribble-supervised segmentation and that of fully-supervised segmentation. We also show that our framework significantly improves segmentation performance on small datasets, even surpassing fully-supervised segmentation. The code is available at https://github.com/mengtang-lab/scribblegen.
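The sketch below is a minimal illustration (not the authors' released code) of how a scribble-conditioned ControlNet diffusion model could synthesize augmented training images, using the Hugging Face diffusers library. The checkpoint names, file paths, and prompt are assumptions; ScribbleGen trains its own scribble-conditioned ControlNet and defines the encode ratio over the diffusion trajectory, whereas this sketch stands in a public segmentation ControlNet and uses the pipeline's strength parameter as a rough analogue.

```python
# Hypothetical sketch: generate an augmented training image from a real image
# and its scribble/condition map with a ControlNet img2img pipeline.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from PIL import Image

# Public segmentation ControlNet used as a stand-in for a scribble-trained one.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

real_image = Image.open("train_image.jpg")          # hypothetical path
scribble_map = Image.open("scribble_condition.png")  # hypothetical path

# guidance_scale is the classifier-free guidance weight that pushes the sample
# toward the class-derived prompt; strength plays the role of an encode ratio:
# lower values stay closer to the real image (more realism), higher values
# re-run more of the diffusion trajectory (more diversity).
augmented = pipe(
    prompt="a photo of a dog and a person",  # hypothetical class-name prompt
    image=real_image,
    control_image=scribble_map,
    strength=0.6,
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
augmented.save("augmented_train_image.png")
```

In practice one would sweep the guidance scale and encode ratio to produce the spectrum of synthetic images described in the abstract, then mix them with the real scribble-annotated images under one of the proposed augmentation schemes.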
Supplementary Material: pdf
Submission Number: 16