*** This folder is a modified copy of https://github.com/asyml/vision-transformer-pytorch and can be used to reproduce the ViT-related experiments in the paper.

    Before starting, please download the ViT checkpoint "imagenet21k+imagenet2012_ViT-B_32.pth" from https://drive.google.com/drive/folders/1azgrD1P413pXLJME0PjRRU-Ez-4GWN-S and put it under the folder weights/pytorch.
    One thing to modify: in src/config.py, change the --data-dir arguments in get_val_nc_config() and get_ft_config() to point at your local dataset directory.
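    The edit to src/config.py looks roughly like the following. This is a hedged sketch, not the repo's actual code: it assumes the config functions build an argparse parser, and the default path shown is a placeholder you should replace with your own dataset root.

    ```python
    import argparse

    def get_val_nc_config():
        """Sketch only: the real function in src/config.py defines many more arguments."""
        parser = argparse.ArgumentParser()
        # Change this default (or pass --data-dir on the command line)
        # to the directory where your datasets live.
        parser.add_argument('--data-dir', type=str,
                            default='/path/to/your/datasets')
        return parser

    # The same --data-dir change applies inside get_ft_config().
    cfg = get_val_nc_config().parse_args(['--data-dir', '/tmp/cifar10'])
    ```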
    
    ##########################################################
    Example usages of the code:
    
    # (Figure 5) For validating the neural collapse metric of each layer
    python src/validate_all_layers.py --model-arch b32 --checkpoint-path weights/pytorch/ --pretrain_model_name imagenet21k+imagenet2012_ViT-B_32.pth --image-size 384 --batch-size 32 --dataset CIFAR10
    
    # (Figure 5) For transfer learning experiments on different layers
    python src/transfer_layers.py --model-arch b32 --checkpoint-path weights/pytorch/ --pretrain_model_name imagenet21k+imagenet2012_ViT-B_32.pth --image-size 384 --batch-size 32 --dataset CIFAR10 --int_layer <layers to do transfer learning (0-11)>
    
    # (Table 2) For transfer learning (train only the linear classifier)
    python src/fine_tune_chosen_layer.py --model-arch b32 --checkpoint-path weights/pytorch/ --pretrain_model_name imagenet21k+imagenet2012_ViT-B_32.pth --image-size 384 --batch-size 32 --dataset CIFAR100 --int_layers 
    
    # (Table 2) For transfer learning (train the penultimate layer together with the linear classifier)
    python src/fine_tune_chosen_layer.py --model-arch b32 --checkpoint-path weights/pytorch/ --pretrain_model_name imagenet21k+imagenet2012_ViT-B_32.pth --image-size 384 --batch-size 32 --dataset CIFAR100 --int_layers 11