Instructions to reproduce the approach in the paper:

1. Download real images from the FFHQ dataset (more details in the paper); generate fake images using StyleGAN trained on FFHQ (https://github.com/NVlabs/stylegan).

2. Place the real and fake images in the "data/real" and "data/fake" folders respectively, and split them into train/val/test folders.
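The repository does not include a split script here; a minimal sketch of one way to split a class folder into train/val/test subfolders (assuming an 80/10/10 split and the data/real, data/fake layout above — ratios and layout are illustrative assumptions) could look like:

```python
import os
import random
import shutil

def split_dataset(src_dir, dst_root, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split the images in src_dir into train/val/test subfolders
    under dst_root, preserving the class name ("real" or "fake")."""
    files = sorted(os.listdir(src_dir))
    random.Random(seed).shuffle(files)
    n_train = int(ratios[0] * len(files))
    n_val = int(ratios[1] * len(files))
    splits = {
        "train": files[:n_train],
        "val": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:],
    }
    label = os.path.basename(src_dir.rstrip("/"))
    for split, names in splits.items():
        out_dir = os.path.join(dst_root, split, label)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src_dir, name), out_dir)
    return {k: len(v) for k, v in splits.items()}
```

Running it once per class folder (data/real and data/fake) produces the train/val/test layout the later steps expect.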

3. Compute the mean and variance of the images by running the following command:
   python compute_mean_var.py
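compute_mean_var.py is not reproduced here; the per-channel statistics it presumably computes for input normalization can be sketched as follows (a streaming accumulation so images need not all be in memory — the exact script may differ):

```python
import numpy as np

def channel_mean_std(images):
    """Per-channel mean and std over an iterable of HxWx3 images in [0, 1].

    Accumulates sums and squared sums, then uses var = E[x^2] - E[x]^2.
    """
    s = np.zeros(3)
    sq = np.zeros(3)
    count = 0
    for img in images:  # each img: HxWx3 float array
        flat = img.reshape(-1, 3)
        s += flat.sum(axis=0)
        sq += (flat ** 2).sum(axis=0)
        count += flat.shape[0]
    mean = s / count
    var = sq / count - mean ** 2
    return mean, np.sqrt(var)
```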

4. Train the baseline adversarially trained model by running the following command:
   python train.py --model_name at
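train.py is the repository's training script; purely as an illustration of the adversarial-training idea it implements (not the repository's code — the toy logistic "detector", FGSM perturbation, and all hyperparameters below are assumptions), a minimal NumPy sketch:

```python
import numpy as np

def adversarial_train(X, y, epsilon=0.1, lr=0.5, epochs=100, seed=0):
    """Toy adversarial training: at each step, perturb inputs with FGSM
    (a single gradient-sign step) and update a logistic regression on the
    perturbed batch. Labels y are 0 (real) / 1 (fake)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        # FGSM: move each input in the direction that increases its loss
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = X + epsilon * np.sign(grad_x)
        # gradient step on the adversarial batch
        p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        err = p_adv - y
        w -= lr * (X_adv.T @ err) / len(y)
        b -= lr * err.mean()
    return w, b
```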

5. Generate saliency maps by running the following command:
   python generate_masks.py
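generate_masks.py implements the paper's mask generation; as a generic illustration only (not the repository's code), gradient-magnitude saliency can be approximated by central finite differences against any scalar score function:

```python
import numpy as np

def saliency_map(score_fn, img, eps=1e-4):
    """Approximate |d score / d pixel| with central finite differences.

    score_fn maps an HxW array to a scalar (e.g. a detector's "fake"
    logit); the result can be thresholded into a binary mask.
    """
    sal = np.zeros_like(img, dtype=float)
    it = np.nditer(img, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        plus, minus = img.copy(), img.copy()
        plus[idx] += eps
        minus[idx] -= eps
        sal[idx] = abs(score_fn(plus) - score_fn(minus)) / (2 * eps)
    return sal
```

Real implementations would backpropagate through the network instead of finite-differencing every pixel; this sketch only shows the quantity being computed.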

6. Train the D3S4 model by running the following command:
   python train.py --model_name d3s4

7. Evaluate the trained models under adversarial attacks by running attack_eval.py.
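attack_eval.py drives the repository's attacks; as a self-contained illustration of what "robust accuracy under an attack" means (assuming a simple logistic detector (w, b) and an FGSM attack — both are stand-ins, not the repository's networks or attack suite):

```python
import numpy as np

def accuracy_under_fgsm(w, b, X, y, epsilon):
    """Robust accuracy of a logistic model under an FGSM attack of
    strength epsilon (labels y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]   # d loss / d x per example
    X_adv = X + epsilon * np.sign(grad_x)    # worst-case perturbed inputs
    pred = (X_adv @ w + b) > 0
    return (pred == y).mean()
```

Sweeping epsilon from 0 upward traces out the robustness curve that an evaluation script of this kind typically reports.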

***** The AutoAttack code is pulled from https://github.com/fra31/auto-attack and modified to fine-tune the hyperparameters. *****