This repo contains code for the ICLR 2022 submission "Poisoning Attacks Are Shortcuts."

This code is tested with Python 3 and PyTorch 1.8.

Please first install the following packages:

    scikit-learn, torch, numpy
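
For convenience, the dependencies can be pinned in a `requirements.txt`; the version bound below is an assumption based on the tested setup, not a hard requirement stated by the repo:

```
# requirements.txt (torch version bound is an assumption from "tested with PyTorch 1.8")
scikit-learn
torch>=1.8
numpy
```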

Here are some example commands to reproduce our experimental results.

To test synthetic noises on CIFAR-10 with ResNet-18, without data augmentation:

    CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model resnet18 --dataset c10
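
The repo's noise-generation code is not shown here, but the idea behind class-wise synthetic noise can be sketched as follows. The function name, shapes, and perturbation budget are illustrative assumptions, not the repository's API:

```python
import numpy as np

def add_classwise_noise(images, labels, num_classes=10, eps=8 / 255, seed=0):
    """Add the same fixed random perturbation to every image of a class.

    images: float array in [0, 1], CIFAR-10-like shape (N, 32, 32, 3).
    Illustrative sketch only, not the repository's implementation.
    """
    rng = np.random.default_rng(seed)
    # One fixed random patch per class, bounded by eps (L-infinity-style budget).
    noises = rng.uniform(-eps, eps, size=(num_classes,) + images.shape[1:])
    poisoned = images + noises[labels]
    return np.clip(poisoned, 0.0, 1.0)
```

Because every image of a class receives the identical perturbation, the noise is linearly separable by class, which is what makes it act as a shortcut during training.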

To test synthetic noises on CIFAR-10 with ResNet-18, with data augmentation:

    CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model resnet18 --dataset c10 --aug

You can also change the model or dataset:

    CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model densenet --dataset c100 --aug

Add the '--clean' flag to train the model on clean data:

    CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model resnet18 --dataset c10 --aug --clean
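
Putting the commands above together, the script's command-line interface likely resembles the following `argparse` setup. This is a hypothetical reconstruction from the flags shown above, not the repo's actual code, and the `choices` lists only include values that appear in the examples:

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the CLI used in the example commands.
    p = argparse.ArgumentParser(description="Train a model on (poisoned) CIFAR data")
    p.add_argument("--model", choices=["resnet18", "densenet"], default="resnet18")
    p.add_argument("--dataset", choices=["c10", "c100"], default="c10")
    p.add_argument("--aug", action="store_true", help="enable data augmentation")
    p.add_argument("--clean", action="store_true", help="train on clean data")
    return p
```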
