# DiffAug: Diffuse and Denoise Augmentation

This repository heavily builds upon [AugMix](https://github.com/google-research/augmix/tree/master) and [Guided Diffusion](https://github.com/openai/guided-diffusion/). 

Download the pretrained 256x256 unconditional diffusion model into a directory called `workdirs`. Additionally, download the ImageNet-C, ImageNet-R, and ImageNet-S datasets as well as the DeepAugment images.
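A minimal setup sketch, assuming the scripts look for `workdirs/` relative to the repository root; the checkpoint URL is the one listed in the guided-diffusion README, so verify it is still current:

```shell
# Create the directory the training scripts expect (assumed location).
mkdir -p workdirs

# Download the 256x256 unconditional checkpoint from the guided-diffusion
# release (uncomment to actually fetch the ~2 GB file):
# wget -P workdirs https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt
```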

The following files are intended for DiffAug finetuning/training:
* diffaugmix_firsthalf.py
* diffaugmix_secondhalf.py
* diffaugmix.py
* diffbase.py
* diffda_augmix.py
* diffda.py

We run them using torchrun as follows:

```
torchrun --standalone --nproc_per_node 4 {file_name} --clean_data {path/to/imagenet}
```
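A sketch of the entry-point pattern the command above implies, assuming the scripts read torchrun's environment variables for distributed setup; only `--clean_data` is taken from this README, any other names are illustrative:

```python
# Sketch of a torchrun-compatible entry point (not the repository's actual code).
import argparse
import os


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="DiffAug training (sketch)")
    parser.add_argument("--clean_data", required=True,
                        help="Path to the clean ImageNet training split")
    return parser.parse_args(argv)


def setup_distributed():
    # torchrun exports RANK, LOCAL_RANK, and WORLD_SIZE for each worker.
    # torch is imported lazily so the argument parsing above stays standalone.
    if "RANK" in os.environ:
        import torch
        import torch.distributed as dist
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```

With `--nproc_per_node 4`, torchrun launches four copies of the script, one per GPU, each receiving a distinct `LOCAL_RANK`.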

For DeiT-III, we reused all of their code and added a single line to also optimize the training loss on DiffAug samples.
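The added line amounts to summing a second cross-entropy term over the augmented batch. A minimal sketch, assuming DiffAug samples keep the labels of their clean counterparts (the function name and signature are hypothetical):

```python
import torch
import torch.nn.functional as F


def combined_loss(model, clean, diffaug, targets):
    # Standard supervised loss on the clean batch ...
    loss_clean = F.cross_entropy(model(clean), targets)
    # ... plus the extra term on the DiffAug samples, which share the
    # same targets (assumption: augmentation is label-preserving).
    loss_aug = F.cross_entropy(model(diffaug), targets)
    return loss_clean + loss_aug
```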

The following files evaluate a trained checkpoint using the Default and DE modes:
* eval_c_diffdenoise.py
* eval_imagenetars_diffdenoise.py

You will need to configure the paths to the evaluation datasets as appropriate.
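As a sketch, the dataset locations could be collected in one place before running the evaluation scripts; all variable names and paths below are hypothetical, so match them to the ones actually used in the scripts:

```python
# Hypothetical dataset path configuration for the evaluation scripts.
IMAGENET_C_DIR = "/datasets/imagenet-c"   # corruption benchmark root
IMAGENET_R_DIR = "/datasets/imagenet-r"   # renditions benchmark root
IMAGENET_S_DIR = "/datasets/imagenet-s"   # sketch benchmark root
```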

We used the code provided by DDA, DDS, and OpenOOD for our other evaluations of these trained classifiers.
