TRAINING:
CIFAR10_train.py and MNIST_train.py allow you to train the various baselines and the REx models from scratch. WARNING: this is computationally expensive.
Two arguments are required: --REx=True (or False) and --seen_domains=MSD (or PGDs, std, or a particular attack). For example, to train the MSD baseline without REx:
python CIFAR10_train.py --REx=False --seen_domains=MSD

Log files and checkpoints will be saved in experiments/<dataset>/<architecture>/train/
Move the final checkpoint to results/<dataset>/models. For any result except MNIST, run print_results_to_csv.py with the --data=<dataset> argument to pretty-print the results of all models in that path and export them to a CSV file.
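The export step performed by print_results_to_csv.py can be pictured roughly as follows. This is a stdlib-only sketch, not the script's actual code: the function name, the pickle format, and the metric keys are all hypothetical.

```python
import csv
import pickle
from pathlib import Path

def export_results_to_csv(results_dir, out_csv):
    """Pretty-print and export metrics of every model found in results_dir.

    Assumes each model's metrics were saved as a pickled dict
    (hypothetical format; the repo's script may store them differently).
    """
    rows = []
    for path in sorted(Path(results_dir).glob("*.pkl")):
        with open(path, "rb") as f:
            metrics = pickle.load(f)  # e.g. {"clean_acc": 0.93, "pgd_acc": 0.51}
        row = {"model": path.stem, **metrics}
        print(row)  # pretty print to stdout
        rows.append(row)
    if rows:
        # union of all metric names, so models with different attacks still fit
        fieldnames = sorted({k for r in rows for k in r})
        with open(out_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
    return rows
```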





TESTING:
MNIST_test_perf and CIFAR10_test_perf Notebooks:
Testing for MNIST and CIFAR10. The first cell automatically loads the final models from "./results/<dataset>/models/" and saves the computed test-set metrics in
"./results/<dataset>/test_accs/" as a dictionary object of the results with the same name as the model file.
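The save convention described above (one metrics dict per model, under the model's own filename) looks roughly like this. The helper name and the use of pickle are assumptions for illustration; the notebooks may serialize differently.

```python
import pickle
from pathlib import Path

def save_test_metrics(model_path, metrics, test_accs_dir):
    """Save a metrics dict under the same name as the model file.

    E.g. results/CIFAR10/models/some_model.pt ->
         results/CIFAR10/test_accs/some_model.pt  (here, a pickled dict).
    Purely illustrative of the naming convention, not the notebooks' code.
    """
    out_dir = Path(test_accs_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / Path(model_path).name  # same name as the model file
    with open(out_path, "wb") as f:
        pickle.dump(metrics, f)
    return out_path
```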

CIFAR10_generate_and_discriminate_perturbed_dataset.ipynb:
1st cell: generates the adversarially perturbed CIFAR10 dataset (tested only with the torch 1.8.1 env); this step is quite time-consuming. The first cell can easily be turned into a .py file by removing the get_ipython().magic('reset -sf') call. WARNING:
requires a (single) base model in "results/CIFAR10/models" to attack. We use the PGD L_inf base model.
Subsequent cells: train and evaluate the attack discriminator on the selected CIFAR classes and attacks (requires torch >= 1.13).
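For orientation, the PGD L_inf attack used to perturb the dataset works along these lines. This is a generic plain-PyTorch sketch with assumed default hyperparameters (eps, alpha, steps), not the repo's actual attack code (which uses advertorch):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generic PGD L_inf attack sketch (illustrative, not the repo's code).

    Repeatedly takes signed-gradient ascent steps on the loss, projecting
    back into the L_inf eps-ball around x and into the valid range [0, 1].
    """
    # random start inside the eps-ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the eps-ball and [0, 1]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```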

CIFAR10_transferability.ipynb:
First cells: fine-tune (i.e. freeze all weights except the last linear layer, which is reset, then train and evaluate) all models in "results/CIFAR10/models" on SVHN and CIFAR100. The code automatically does this for both datasets in one run since it is cheap. WARNING: you may want to reduce the number of repeats to 1 if you do not need statistics over multiple training runs of the same pretrained model.
Last cell: evaluate all defenses in "results/CIFAR10/models" on CIFAR-10-C.
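The freeze-and-reset setup for fine-tuning can be sketched as follows. Since the attribute name of the final layer varies per architecture, this sketch simply replaces the last nn.Linear found in the module tree; that heuristic, and the function name, are assumptions rather than the notebook's actual code.

```python
import torch.nn as nn

def prepare_for_finetuning(model, num_classes):
    """Freeze all weights, then reset the last linear layer for training.

    Illustrative sketch of the fine-tuning setup described above: every
    parameter is frozen, and the last nn.Linear in the module tree is
    replaced by a fresh (trainable) one sized for the new dataset.
    """
    for p in model.parameters():
        p.requires_grad = False
    # find the last nn.Linear in the module tree
    last_name, last_linear = None, None
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            last_name, last_linear = name, module
    assert last_linear is not None, "model has no nn.Linear layer"
    fresh = nn.Linear(last_linear.in_features, num_classes)  # trainable by default
    # walk to the parent module and swap in the fresh layer
    parent = model
    *path, attr = last_name.split(".")
    for part in path:
        parent = getattr(parent, part)
    setattr(parent, attr, fresh)
    return model
```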







REQS:
requirements.txt contains the versions of the various libraries used. The important ones are advertorch, autoattack, torch 1.8.1+cu111, and torchvision 0.9.1.

requirements_vit.txt contains the requirements to run the cells of CIFAR10_generate_and_discriminate_perturbed_dataset.ipynb. The important
libs are torch >= 1.13 and the matching version of torchvision, since using a ViT from torchvision requires a more recent version
of PyTorch.
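Since the two environments are easy to mix up, a small stdlib-only version check can be dropped at the top of a notebook to fail fast. This helper is hypothetical, not part of the repo:

```python
def meets_min_version(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.13.1' >= '1.13'.

    Ignores a local suffix such as '+cu111'. Hypothetical helper, not part
    of the repo; for pre-release tags, prefer packaging.version instead.
    """
    def parts(v):
        return [int(p) for p in v.split("+")[0].split(".")]
    a, b = parts(installed), parts(required)
    # pad the shorter list with zeros so that '1.13' == '1.13.0'
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a >= b

# Example guard at the top of the ViT notebook:
# import torch
# assert meets_min_version(torch.__version__, "1.13"), "need torch >= 1.13"
```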