Training Robust Classifiers with Diffusion Denoised Examples

Published: 09 Apr 2024, Last Modified: 21 Apr 2024
Venue: SynData4CV
License: CC BY 4.0
Keywords: denoising diffusion models; DDPM; synthetic images; robustness
TL;DR: Diffusion-denoised images are effective train-time and test-time augmentations for robust classification.
Abstract: In this paper, we explore diffusion-denoised examples as augmentations for training image classifiers. In particular, we diffuse each training example to a randomly sampled diffusion time (i.e., apply Gaussian perturbation) and then apply a single diffusion denoising step to generate an augmented training example. We analyze classifiers trained with such diffusion-denoised examples through comparisons with classifiers trained exclusively with (i) standard augmentations such as horizontal flips and crops and (ii) novel augmentations such as AugMix and DeepAugment. We show that classifiers trained with diffusion-denoised examples are more robust than classifiers trained with standard augmentations, without sacrificing clean test accuracy. Furthermore, we demonstrate that diffusion-denoised augmentations are also useful at test time, which allows us to introduce a simple and efficient image-adaptation method that is competitive with DDA.
Submission Number: 55
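
The abstract describes the augmentation as a forward diffusion to a random timestep followed by a single denoising step. Below is a minimal sketch of that procedure under stated assumptions: `eps_model(x_t, t)` stands in for a pretrained DDPM noise predictor and `alpha_bar` for the cumulative-alpha noise schedule; both names are hypothetical and not taken from the paper.

```python
import torch

def diffusion_denoised_augment(x0, eps_model, alpha_bar):
    """Sketch of diffusion-denoised augmentation (assumed interfaces).

    x0:        batch of images in [-1, 1], shape [B, C, H, W]
    eps_model: assumed pretrained DDPM noise predictor, eps_model(x_t, t) -> eps_hat
    alpha_bar: tensor of cumulative alphas from the DDPM noise schedule, shape [T]
    """
    b = x0.shape[0]
    T = alpha_bar.shape[0]

    # Sample a diffusion time per example.
    t = torch.randint(0, T, (b,), device=x0.device)
    a_bar = alpha_bar[t].view(b, 1, 1, 1)

    # Forward diffusion (Gaussian perturbation):
    # x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # Single denoising step: estimate x0 from the predicted noise.
    eps_hat = eps_model(x_t, t)
    x0_hat = (x_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()
    return x0_hat.clamp(-1.0, 1.0)
```

In a training loop, the returned `x0_hat` batch could replace or be mixed with the clean batch before the classifier forward pass; at test time the same transform could be applied to inputs before prediction, in the spirit of the test-time use described in the abstract.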