Metadata-Version: 2.1
Name: ats
Version: 0.0.1
Summary: Adversarial Transferability Study
Home-page: https://github.com/osoblanco/AdversarialTransfer
Author: Aram Markosyan, Davit Papikyan, Erik Arakelyan
Author-email: 
License: UNKNOWN
Platform: UNKNOWN
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.8
Description-Content-Type: text/markdown

# Adversarial Transferability Study

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pLhMBcO-itsFyhYKH6dzWgoPKAQP8ULi?usp=sharing#scrollTo=fp_66ZthMRfm)


The repository includes the codebase and results for the paper [Adversarial Transferability Study]().

## Abstract

Adversarial transferability has been widely explored in machine vision research under white-box and black-box settings. Composing misleading examples and exploring the adversarial space of various architectures has commonly led to more robust training and better generalization in vision systems. In this work, we explore adversarial transferability within self-supervised learning frameworks that share a common pretrained backbone, operating under a gray-box attack assumption. We study the effects of transferring adversarial features within such models across various tasks, architectures, fine-tuning regimes, and datasets.

## Repository Structure

The repository is structured as a standalone package, laid out as follows:

```
.
├── README.md
├── ats
│   ├── __init__.py
│   ├── data
│   │   ├── __init__.py
│   │   └── transforms
│   │       ├── __init__.py
│   │       ├── base.py
│   │       ├── cifar10.py
│   │       └── utils.py
│   ├── models
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── classification
│   │   │   ├── __init__.py
│   │   ├── self_supervised
│   │   │   ├── __init__.py
├── tests
├── notebooks
│   └── prototype_notebook.ipynb
└── setup.py
```

## Installation

### VISSL Installation

The repository uses [VISSL](https://github.com/facebookresearch/vissl) for self-supervised learning and is tested on `Python 3.8`. First, create and activate a conda environment:

```bash
# Create and activate a conda environment
conda create -n env_ats python=3.8
conda activate env_ats
```

After activating the environment, install the dependencies by running the following commands:

```bash
# Note that the CUDA and torch versions can be changed in accordance with your own setup (minimum torch version is 1.7.1)
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install -f https://dl.fbaipublicfiles.com/vissl/packaging/apexwheels/py38_cu102_pyt181/download.html apex
pip install opencv-python
pip uninstall -y classy_vision
pip install classy-vision@https://github.com/facebookresearch/ClassyVision/tarball/4785d5ee19d3bcedd5b28c1eb51ea1f59188b54d
pip install vissl
pip install -U matplotlib
pip install -r requirements.txt

# Create the dataset registry used by VISSL.
touch dataset_catalog.json
```


Finally, install the package itself in editable mode:

```bash
pip install -e .
```

## Configuration guide (WIP)

We configure the `vissl` installation by creating a `config` directory and a `dataset_catalog.json` file. The `dataset_catalog.json` file is used to store the paths to the datasets. The `config` directory is used to store the configuration files for the experiments. The following commands will create the necessary directories and files.

```bash
mkdir -p ./configs/config/ && touch ./configs/__init__.py ./configs/config/dataset_catalog.json 
pip install -r requirements.txt
```

Some examples of downloading the necessary data, models and configs: 
```bash
# Downloading datasets.
python ./ats/data/get_cifar10.py  # Cifar-10.

# Downloading models:
#   usage: ./vissl_tools/helpers/download_model.sh <URL to model checkpoint>
./vissl_tools/helpers/download_model.sh https://dl.fbaipublicfiles.com/vissl/model_zoo/sup_rn50_in1k_ep105_supervised_8gpu_resnet_17_07_20.733dbdee/model_final_checkpoint_phase208.torch

# Downloading config files:
#   usage:  ./vissl_tools/helpers/download_config.sh <URL to yaml file>
./vissl_tools/helpers/download_config.sh https://raw.githubusercontent.com/facebookresearch/vissl/main/configs/config/benchmark/linear_image_classification/cifar10/eval_resnet_8gpu_transfer_cifar10_linear.yaml
```

## Training and Processing with Custom Datasets

VISSL supports training and processing with custom datasets (i.e., datasets in neither the VISSL nor the torchvision format). The following steps are required:

After downloading the dataset, several [extra scripts from VISSL](https://github.com/facebookresearch/vissl/blob/main/extra_scripts/README.md) (which for convenience were ported to `ats/data/extra_scripts`) can be used to convert the dataset to the VISSL format. The following command performs the conversion:


```bash 
# Example on the Oxford-IIIT Pet dataset (oxford-iiit-pet was downloaded using get_pets.py).
python ats/data/extra_scripts/create_oxford_pets_data_files.py -i ats/data/datasets/oxford-iiit-pet/ -o ats/data/datasets/Oxford_Pet -d
```

Update the `dataset_catalog.json` file with the path to the dataset. The following is an example of the `dataset_catalog.json` file:

```json
{
    "Oxford_Pet": {
        "train": ["/home/erik/Documents/UCPH/AdversarialTransfer/ats/data/datasets/Oxford_Pet/train", "<ignored>"],
        "val": ["/home/erik/Documents/UCPH/AdversarialTransfer/ats/data/datasets/Oxford_Pet/test", "<ignored>"]
    }
}
```
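A stale path in `dataset_catalog.json` typically only surfaces as a confusing error deep inside VISSL's data loading, so it can be worth sanity-checking the catalog before launching a run. The helper below is an illustrative sketch (not part of the package) that assumes the catalog format shown above, where each split maps to a `[data_path, label_path]` pair:

```python
import json
import os
import tempfile

def check_catalog(catalog_path):
    """Return (dataset, split, path) triples whose data path is missing on disk."""
    with open(catalog_path) as f:
        catalog = json.load(f)
    missing = []
    for dataset, splits in catalog.items():
        for split, (data_path, _labels) in splits.items():
            if not os.path.isdir(data_path):
                missing.append((dataset, split, data_path))
    return missing

# Example: a catalog with one existing and one missing directory.
with tempfile.TemporaryDirectory() as tmp:
    train_dir = os.path.join(tmp, "train")
    os.makedirs(train_dir)
    catalog = {"Oxford_Pet": {"train": [train_dir, "<ignored>"],
                              "val": [os.path.join(tmp, "test"), "<ignored>"]}}
    catalog_path = os.path.join(tmp, "dataset_catalog.json")
    with open(catalog_path, "w") as f:
        json.dump(catalog, f)
    print(check_catalog(catalog_path))  # only the missing "val" path is reported
```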

When training with custom datasets, the data sources must be updated in the command-line arguments (essentially overriding the config). The following is an example:

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/Oxford_pets/deit=dino_deit_s16 \
        config.DATA.TRAIN.DATA_SOURCES="[disk_folder]" \
        config.DATA.TRAIN.LABEL_SOURCES="[disk_folder]" \
        config.DATA.TRAIN.DATASET_NAMES="[Oxford_Pet]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/Oxford_Pet]" \
        config.DATA.TEST.DATA_SOURCES="[disk_folder]" \
        config.DATA.TEST.LABEL_SOURCES="[disk_folder]" \
        config.DATA.TEST.DATASET_NAMES="[Oxford_Pet]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/Oxford_Pet]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_finetune_dino_deit_s16_OxfordIIITPet_new" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/dino_300ep_deitsmall16/model_final_checkpoint_phase299.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model._features" \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False
```

The main difference from standard fine-tuning is the following:

```bash
config.DATA.TRAIN.DATA_SOURCES="[disk_folder]" \
config.DATA.TRAIN.LABEL_SOURCES="[disk_folder]" \
config.DATA.TRAIN.DATASET_NAMES="[Oxford_Pet]" \
config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/Oxford_Pet]" \
config.DATA.TEST.DATA_SOURCES="[disk_folder]" \
config.DATA.TEST.LABEL_SOURCES="[disk_folder]" \
config.DATA.TEST.DATASET_NAMES="[Oxford_Pet]" \
config.DATA.TEST.DATA_PATHS="[ats/data/datasets/Oxford_Pet]"
Here we simply replace the data source from `torchvision_dataset` with `disk_folder`; `DATA_PATHS` is the path to the dataset on disk.
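The `disk_folder` source reads a `<split>/<class>/<image>` directory layout, which is what the conversion script above produces. As a toy sketch (hypothetical class names, empty placeholder files instead of real images), the expected layout can be built like this:

```python
import os
import tempfile

def make_disk_folder(root, splits):
    """Create the <split>/<class>/<image> layout read by VISSL's disk_folder source."""
    for split, classes in splits.items():
        for cls, images in classes.items():
            cls_dir = os.path.join(root, split, cls)
            os.makedirs(cls_dir, exist_ok=True)
            for name in images:
                # Touch placeholder files; a real dataset would hold actual images.
                open(os.path.join(cls_dir, name), "wb").close()

with tempfile.TemporaryDirectory() as root:
    make_disk_folder(root, {
        "train": {"cat": ["cat_0.jpg"], "dog": ["dog_0.jpg"]},
        "test":  {"cat": ["cat_1.jpg"], "dog": ["dog_1.jpg"]},
    })
    for split in ("train", "test"):
        print(split, sorted(os.listdir(os.path.join(root, split))))
```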

## Experiments

### Sanity-check (MNIST → MNIST)

<details>
  <summary>Step 1. Set-up</summary>

```bash
# Downloading RN50 - Torchvision (Supervised)
./vissl_tools/helpers/download_model.sh https://download.pytorch.org/models/resnet50-19c8e357.pth

# Downloading MNIST.
python ./ats/data/get_mnist.py

# Downloading the corresponding config file.
./vissl_tools/helpers/download_config.sh https://raw.githubusercontent.com/facebookresearch/vissl/main/configs/config/benchmark/linear_image_classification/mnist/eval_resnet_8gpu_transfer_mnist_linear.yaml
```
</details>

<details>
  <summary>Step 2. Fine-tuning models.</summary>

TODO: Add epochs, learning rate, learning rate scheduler (with milestones), optimization algorithm, and regularization (dropout, batch norm) to the step 2 training CLI.

```bash
# Fine-tuning only the attached classification head keeping backbone frozen.
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_mnist_linear.yaml \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[MNIST]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[MNIST]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.OPTIMIZER.num_epochs=1 \
        config.OPTIMIZER.param_schedulers.lr.values="[0.01]" \
        config.OPTIMIZER.param_schedulers.lr.milestones="[]" \
        config.CHECKPOINT.DIR="ats/tuned_models/model_target_finetune_MNIST" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/resnet50-19c8e357.pth" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model._feature_blocks." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp", {"in_channels": 2048, "dims": [2048, 10]}]]""" \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False

# Full-tuning the whole model (including backbone).
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_mnist_linear.yaml \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[MNIST]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[MNIST]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.OPTIMIZER.num_epochs=1 \
        config.OPTIMIZER.param_schedulers.lr.values="[0.01]" \
        config.OPTIMIZER.param_schedulers.lr.milestones="[]" \
        config.CHECKPOINT.DIR="ats/tuned_models/model_target_fulltune_MNIST" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/resnet50-19c8e357.pth" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk._feature_blocks." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp", {"in_channels": 2048, "dims": [2048, 10]}]]""" \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>

<details>
  <summary>Step 3. Sanity check for two identical release sets.</summary>

Use checkpoints from the previous step to initialize the first and the second models. 
```bash
python ats/compute_tranferability.py --model_proxy=model_1_finetune_MNIST/model_final_checkpoint_phase0.torch --model_target=model_2_finetune_MNIST/model_final_checkpoint_phase0.torch --data_name=mnist --model_name=resnet50 --attack_type=fgsm --experiment_group=sanity_check --experiment_name=rn50_full_to_rn50_full --mode_eval_proxy
```

If either model is **full-tuned**, pass the corresponding flag so that VISSL loads the model correctly.
```bash
python ats/compute_tranferability.py --model_proxy=model_1_finetune_MNIST/model_final_checkpoint_phase0.torch --model_proxy_is_fulltuned --model_target=model_2_finetune_MNIST/model_final_checkpoint_phase0.torch --model_target_is_fulltuned --data_name=mnist --model_name=resnet50 --attack_type=fgsm --experiment_group=sanity_check --experiment_name=rn50_full_to_rn50_full --mode_eval_proxy
```

</details>
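The `--attack_type=fgsm` flag above refers to the fast gradient sign method, which perturbs each input coordinate by a fixed step in the direction that increases the loss. As a rough illustration of what the attack computes (a dependency-free toy with a hypothetical linear "model", not the package implementation):

```python
def fgsm_step(x, grad, eps):
    """x_adv = x + eps * sign(d loss / d x): the fast gradient sign method."""
    sign = lambda g: 1 if g > 0 else -1 if g < 0 else 0
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

def linear_loss_grad(w, x, y):
    """Gradient of the squared error (w.x - y)^2 with respect to the input x."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * wi for wi in w]

w = [0.5, -1.0]                        # toy "model" weights
x = [1.0, 2.0]                         # clean input
grad = linear_loss_grad(w, x, y=0.0)   # gradient of the loss w.r.t. the input
x_adv = fgsm_step(x, grad, eps=0.1)
print(x_adv)  # each coordinate is nudged by +/-0.1 in the loss-increasing direction
```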

***

## How to fine/full tune a VISSL pretrained backbone on any dataset
1) First, make sure that the **DATASET** is present in the `./ats/data/datasets` directory. Otherwise, run the corresponding download script in `./ats/data`.
2) Choose a backbone you wish to fine/full-tune from the [VISSL MODEL ZOO](https://github.com/facebookresearch/vissl/blob/main/MODEL_ZOO.md) and copy the link listed under the URL column. Download the backbone by running the following command with that URL: `./vissl_tools/helpers/download_model.sh <BACKBONE-URL>`. After this step the checkpoint file will be saved under the `ats/models` directory.
3) Select the folder for the corresponding dataset (step 1) from [here](https://github.com/facebookresearch/vissl/tree/main/configs/config/benchmark/linear_image_classification), go into that directory, and download the `eval_resnet_8gpu_transfer_<dataset>_linear.yaml` listed there by running `./vissl_tools/helpers/download_config.sh <CONFIG-URL>`.
4) Meanwhile, make sure that your model is listed under the `models` directory. Pick the `<MODEL-NAME>.yaml` config file and download it with:
    `wget -q -O ./configs/config/models/<dataset>/<MODEL-NAME>.yaml <URL to MODEL-NAME.yaml>`
5) Now that the preparatory steps are completed, we are ready to fine/full-tune the backbone. Fill in the variables **DATASET** (step 1), **BACKBONE-CHECKPOINT** (step 2), **CONFIG-URL** (step 3), and **MODEL-NAME** (step 4) in the scripts below and run:
<details>
  <summary>5.1 Fine-tuning</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_<dataset>_linear.yaml \
        +config/models/<dataset>=<MODEL-NAME> \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[<DATASET>]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[<DATASET>]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/<target|proxy>_<finetune|fulltune>_<MODEL-NAME>_<DATASET>" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/<BACKBONE-CHECKPOINT>" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>

<details>
  <summary>5.2 Full-tuning</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_<dataset>_linear.yaml \
        +config/models/<dataset>=<MODEL-NAME> \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[<DATASET>]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[<DATASET>]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/<target|proxy>_<finetune|fulltune>_<MODEL-NAME>_<DATASET>" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/<BACKBONE-CHECKPOINT>" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>


### An example of fine/full-tuning ***DINO ViT-S/16 - 300 epochs - 1024 batch-size*** on CIFAR10
<details>
    <summary>Preparation (steps 1-4)</summary>

```bash
# 1) Download dataset:
python ats/data/get_cifar10.py

# 2) Download backbone:
./vissl_tools/helpers/download_model.sh https://dl.fbaipublicfiles.com/vissl/model_zoo/dino_300ep_deitsmall16/model_final_checkpoint_phase299.torch

# 3) Download training config:
./vissl_tools/helpers/download_config.sh https://github.com/facebookresearch/vissl/blob/main/configs/config/benchmark/linear_image_classification/cifar10/eval_resnet_8gpu_transfer_cifar10_linear.yaml

# 4) Download model architecture config:
wget -q -O ./configs/config/models/cifar10/dino_deit_s16.yaml https://raw.githubusercontent.com/facebookresearch/vissl/main/configs/config/benchmark/linear_image_classification/cifar10/models/dino_deit_s16.yaml
```
</details>

<details>
  <summary>Fine-tuning (step 5.1)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=dino_deit_s16 \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_finetune_dino_deit_s16_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_final_checkpoint_phase299.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>

<details>
  <summary>Full-tuning (step 5.2)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=dino_deit_s16 \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_fulltune_dino_deit_s16_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_final_checkpoint_phase299.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>


### An example of fine/full-tuning ***DINO XCiT-S/16 - 300 epochs - 1024 batch-size*** on CIFAR10
<details>
    <summary>Preparation (steps 1-4)</summary>

```bash
# 1) Download dataset:
python ats/data/get_cifar10.py

# 2) Download backbone:
./vissl_tools/helpers/download_model.sh https://dl.fbaipublicfiles.com/vissl/model_zoo/dino_300ep_xcitsmall16/model_phase250.torch

# 3) Download training config:
./vissl_tools/helpers/download_config.sh https://github.com/facebookresearch/vissl/blob/main/configs/config/benchmark/linear_image_classification/cifar10/eval_resnet_8gpu_transfer_cifar10_linear.yaml

# 4) Download model architecture config (since VISSL doesn't provide a model config for CIFAR10, we download the ImageNet1k one and change the classification head):
wget -q -O ./configs/config/models/cifar10/dino_xcit_s16.yaml https://raw.githubusercontent.com/facebookresearch/vissl/main/configs/config/benchmark/linear_image_classification/imagenet1k/models/dino_xcit_s16.yaml
```
</details>

<details>
  <summary>Fine-tuning (step 5.1)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=dino_xcit_s16 \
        config.MODEL.FEATURE_EVAL_SETTINGS.LINEAR_EVAL_FEAT_POOL_OPS_MAP="""[["lastCLS",["Identity",[]]]]""" \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp", {"in_channels": 384, "dims": [384, 10]}]]""" \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_finetune_dino_xcit_s16_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_phase250.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>

<details>
  <summary>Full-tuning (step 5.2)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=dino_xcit_s16 \
        config.MODEL.FEATURE_EVAL_SETTINGS.LINEAR_EVAL_FEAT_POOL_OPS_MAP="""[["lastCLS",["Identity",[]]]]""" \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp", {"in_channels": 384, "dims": [384, 10]}]]""" \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_fulltune_dino_xcit_s16_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_phase250.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>


### An example of fine/full-tuning ***Jigsaw RN50 - 10K permutations*** on CIFAR10
<details>
    <summary>Preparation (steps 1-4)</summary>

```bash
# 1) Download dataset:
python ats/data/get_cifar10.py

# 2) Download backbone:
./vissl_tools/helpers/download_model.sh https://dl.fbaipublicfiles.com/vissl/model_zoo/jigsaw_rn50_in1k_ep105_perm2k_jigsaw_8gpu_resnet_20_07_20.3d706467/model_final_checkpoint_phase104.torch

# 3) Download training config:
./vissl_tools/helpers/download_config.sh https://github.com/facebookresearch/vissl/blob/main/configs/config/benchmark/linear_image_classification/cifar10/eval_resnet_8gpu_transfer_cifar10_linear.yaml

# 4) Skip this step as the downloaded config file in the previous step is designed for ResNet50 architecture.
```
</details>

<details>
  <summary>Fine-tuning (step 5.1)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_finetune_jigsaw_rn50_10K_perm_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_final_checkpoint_phase104.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>

<details>
  <summary>Full-tuning (step 5.2)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_fulltune_jigsaw_rn50_10K_perm_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/model_final_checkpoint_phase104.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="classy_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>
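Each `config.X.Y=value` argument above is a Hydra-style dotted override: it sets a single leaf of the nested YAML config before VISSL launches. A minimal illustration (plain Python, not VISSL code) of how such dotted assignments compose into a nested config:

```python
def apply_overrides(config, overrides):
    """Apply Hydra-style dotted overrides (e.g. 'config.TEST_MODEL=True') to a nested dict."""
    for item in overrides:
        path, _, raw = item.partition("=")
        keys = path.split(".")
        node = config
        for key in keys[:-1]:
            node = node.setdefault(key, {})  # create intermediate levels on demand
        # Minimal literal parsing: booleans and ints, everything else stays a string.
        if raw in ("True", "False"):
            value = raw == "True"
        else:
            try:
                value = int(raw)
            except ValueError:
                value = raw
        node[keys[-1]] = value
    return config

cfg = apply_overrides({}, [
    "config.TEST_MODEL=True",
    "config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64",
])
# cfg["config"]["DATA"]["TRAIN"]["BATCHSIZE_PER_REPLICA"] is now 64
```

Hydra's real override grammar also supports lists, `+key` additions (as in `+config/models/cifar10=alexnet_jigsaw`), and type coercion; this sketch only shows the nesting idea.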


### An example of fine/full-tuning ***AlexNet - Goyal'19 (ImageNet-1K)*** on CIFAR10
<details>
    <summary>Preparation (steps 1-4)</summary>

```bash
# 1) Download dataset:
python ats/data/get_cifar10.py

# 2) Download backbone:
./vissl_tools/helpers/download_model.sh https://dl.fbaipublicfiles.com/vissl/model_zoo/converted_alexnet_jigsaw_in1k_pretext.torch
# Place the checkpoint where step 5 expects it (see WEIGHTS_INIT.PARAMS_FILE below):
mkdir -p ./ats/models/converted_alexnet_jigsaw_in1k_pretext
mv ./ats/models/converted_alexnet_jigsaw_in1k_pretext.torch ./ats/models/converted_alexnet_jigsaw_in1k_pretext/

# 3) Download training config:
./vissl_tools/helpers/download_config.sh https://github.com/facebookresearch/vissl/blob/main/configs/config/benchmark/linear_image_classification/cifar10/eval_resnet_8gpu_transfer_cifar10_linear.yaml

# 4) Download model config:
wget -q -O ./configs/config/models/cifar10/alexnet_jigsaw.yaml https://raw.githubusercontent.com/facebookresearch/vissl/main/configs/config/benchmark/linear_image_classification/imagenet1k/models/alexnet_jigsaw.yaml
```
</details>

<details>
  <summary>Fine-tuning (step 5.1)</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=alexnet_jigsaw \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp",{"in_channels":96,"dims":[9600,10]}],["eval_mlp",{"in_channels":256,"dims":[9216,10]}],["eval_mlp",{"in_channels":384,"dims":[9600,10]}],["eval_mlp",{"in_channels":384,"dims":[9600,10]}],["eval_mlp",{"in_channels":256,"dims":[9216,10]}]]""" \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_finetune_alexnet_goyal19_imagenet1k_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/converted_alexnet_jigsaw_in1k_pretext/converted_alexnet_jigsaw_in1k_pretext.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="model_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk.base_model." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>
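The long `config.MODEL.HEAD.PARAMS` override above attaches one `eval_mlp` linear head per feature stage (the `in_channels` values 96, 256, 384, 384, 256 match AlexNet's five conv layers), and every head's `dims` list ends in 10, the number of CIFAR10 classes. A quick sanity check of that literal in plain Python:

```python
import ast

# The HEAD.PARAMS value from the command above, parsed as a Python literal.
head_params = ast.literal_eval(
    '[["eval_mlp",{"in_channels":96,"dims":[9600,10]}],'
    '["eval_mlp",{"in_channels":256,"dims":[9216,10]}],'
    '["eval_mlp",{"in_channels":384,"dims":[9600,10]}],'
    '["eval_mlp",{"in_channels":384,"dims":[9600,10]}],'
    '["eval_mlp",{"in_channels":256,"dims":[9216,10]}]]'
)

assert len(head_params) == 5                             # one head per conv stage
assert all(name == "eval_mlp" for name, _ in head_params)
assert all(p["dims"][-1] == 10 for _, p in head_params)  # 10 = CIFAR10 classes
```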

<details>
  <summary>Full-tuning (step 5.2) [currently not working: output shape errors]</summary>

```bash
python vissl_tools/run_distributed_engines.py \
        hydra.verbose=true \
        config=eval_resnet_8gpu_transfer_cifar10_linear.yaml \
        +config/models/cifar10=alexnet_jigsaw \
        config.MODEL.HEAD.PARAMS="""[["eval_mlp",{"in_channels":96,"dims":[9600,10]}],["eval_mlp",{"in_channels":256,"dims":[9216,10]}],["eval_mlp",{"in_channels":384,"dims":[9600,10]}],["eval_mlp",{"in_channels":384,"dims":[9600,10]}],["eval_mlp",{"in_channels":256,"dims":[9216,10]}]]""" \
        config.DATA.TRAIN.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TRAIN.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TRAIN.DATA_PATHS="[ats/data/datasets/]" \
        config.DATA.TEST.DATA_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.LABEL_SOURCES="[torchvision_dataset]" \
        config.DATA.TEST.DATASET_NAMES="[CIFAR10]" \
        config.DATA.TEST.DATA_PATHS="[ats/data/datasets/]" \
        config.TEST_MODEL=True \
        config.DATA.TEST.BATCHSIZE_PER_REPLICA=64 \
        config.DATA.TRAIN.BATCHSIZE_PER_REPLICA=64 \
        config.DISTRIBUTED.NUM_NODES=1 \
        config.DISTRIBUTED.NUM_PROC_PER_NODE=1 \
        config.CHECKPOINT.DIR="ats/tuned_models/proxy_fulltune_alexnet_goyal19_imagenet1k_CIFAR10" \
        config.MODEL.WEIGHTS_INIT.PARAMS_FILE="ats/models/converted_alexnet_jigsaw_in1k_pretext/converted_alexnet_jigsaw_in1k_pretext.torch" \
        config.MODEL.WEIGHTS_INIT.STATE_DICT_KEY_NAME="model_state_dict" \
        config.MODEL.WEIGHTS_INIT.APPEND_PREFIX="trunk." \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON=True \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_AND_HEAD=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY=False \
        config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_TRUNK_AND_HEAD=False \
        config.HOOKS.TENSORBOARD_SETUP.USE_TENSORBOARD=False \
        config.CHECKPOINT.OVERWRITE_EXISTING=False
```
</details>



***


## Usage

Import the package and use its modules directly:

```python
import ats
```

## Prototype Notebooks

The notebooks used for prototyping are stored under the `notebooks/` directory and are also linked via the Colab badge above.


## CI/CD

The repository uses GitHub Actions for continuous integration and deployment. The workflows live in the `.github/workflows` directory and are triggered on pushes to the `main` branch:

- `build.yml`: builds the package and runs the tests.

- `publish.yml`: builds the package and publishes it to PyPI.

Unit tests are defined under the `tests` directory.
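A minimal `build.yml` along these lines might look like the following (a hedged sketch; the actual workflow may differ in triggers, Python versions, and steps):

```yaml
name: build

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.8"   # package requires Python >= 3.8
      - name: Install package
        run: pip install .
      - name: Run tests
        run: pytest tests/
```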

## Citation

If you find this repository useful, please cite the following paper:

```bibtex
@article{,
  title={},
  author={},
  journal={},
  year={},
  volume={},
  number={},
  pages={},
  doi={}
}
```

## Contact

For any questions or concerns, please open an issue or contact the authors.

