
We have built our experiments on top of the publicly available code released by the authors of FedDyn.


Create the directory Output if it is not already present, and download the data to the folder Data.
Install the requirements with:

pip install -r requirements.txt

Commands to run the code

CIFAR-100 Experiments


Go to the CIFAR-100 code folder and run the following commands. These commands use a Dirichlet distribution with heterogeneity 0.3 (rule_arg=0.3).

FedAvg:
python example_cifar_10.py --model_name=cifar100 --dataset_name=CIFAR100 --add_reg=0 --unbalanced_sgm=0 --rule=Dirichlet --rule_arg=0.3  --alg_name=FedAvg --lamda=20.0 --epoch=5 --lr_decay_per_round=0.998 --learning_rate=0.1  --ntd=0 --uniform_distill=0 --entropy_flag=1 --dist_beta_kl=1.0 --dist_beta=1 --disco=0


FedAvg+ASD:
python example_cifar_10.py --model_name=cifar100 --dataset_name=CIFAR100 --add_reg=1 --unbalanced_sgm=0 --rule=Dirichlet --rule_arg=0.3  --alg_name=FedAvgReg --lamda=20.0 --epoch=5 --lr_decay_per_round=0.998 --learning_rate=0.1   --ntd=0 --uniform_distill=0 --entropy_flag=1 --dist_beta_kl=1.0 --dist_beta=1 --disco=0


For FedProx, change alg_name to FedProx; for FedProx + ASD, change alg_name to FedProx and set add_reg to 1.

All the commands for running the different algorithms are provided in the file run_exp.sh.



To change the heterogeneity, set the argument "rule_arg" in the above commands to a different value, e.g. 0.6.

For the IID case, set the argument "rule" in the above commands to iid.
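To get an intuition for how the Dirichlet concentration parameter (rule_arg) controls heterogeneity, the small NumPy sketch below is illustrative only (it is not part of the repo's partitioning code, which may differ in details): each client's label proportions are drawn from a Dirichlet distribution, and a smaller concentration yields more skewed, non-IID clients.

```python
import numpy as np

# Hypothetical illustration: how the Dirichlet concentration parameter
# (rule_arg in the commands above) shapes per-client label distributions.
rng = np.random.default_rng(0)

n_clients, n_classes = 5, 100  # e.g. CIFAR-100 has 100 classes
for alpha in (0.3, 100.0):
    # One row per client: the fraction of that client's data in each class.
    props = rng.dirichlet([alpha] * n_classes, size=n_clients)
    # With alpha=0.3 a few classes dominate each client (highly non-IID);
    # with a large alpha every client is close to uniform (IID-like).
    print(f"alpha={alpha}: max class share per client =",
          np.round(props.max(axis=1), 3))
```

Larger rule_arg values therefore move the partition toward the IID setting, which is why the repo also exposes rule=iid as a separate option.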

Tiny-ImageNet Experiments

Go to the tinyimagenet folder, download the data file tinyimagenet.pt, and then run the following commands.

FedAvg:
python example_cifar_10.py --model_name=ConvNet --dataset_name=TinyImageNet --add_reg=0 --unbalanced_sgm=0 --rule=Dirichlet --rule_arg=0.3 --alg_name=FedAvg --lamda=30.0 --epoch=5 --lr_decay_per_round=0.998 --learning_rate=0.1 --ntd=0 --uniform_distill=0 --entropy_flag=1 --dist_beta_kl=1.0 --dist_beta=1 --disco=0

FedAvg+ASD:
python example_cifar_10.py --model_name=ConvNet --dataset_name=TinyImageNet --add_reg=1 --unbalanced_sgm=0 --rule=Dirichlet --rule_arg=0.3 --alg_name=FedAvgReg --lamda=30.0 --epoch=5 --lr_decay_per_round=0.998 --learning_rate=0.1 --ntd=0 --uniform_distill=0 --entropy_flag=1 --dist_beta_kl=1.0 --dist_beta=1 --disco=0


After the experiment completes, a file named "500_com_tst_perf_all.npy" is generated in the Output folder. It contains the test accuracy of the averaged (global) model after every communication round, and is used for plotting and reporting the results in the main paper.
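The saved array can be inspected with NumPy. A minimal sketch, assuming the file holds one accuracy value per communication round; the dummy file created here is only so the snippet runs stand-alone — in practice, point np.load at the file inside the Output folder:

```python
import numpy as np

# Stand-in for the real results file: 500 communication rounds of
# (synthetic) accuracies, saved under the same name for illustration.
dummy = np.linspace(0.10, 0.45, 500)
np.save("500_com_tst_perf_all.npy", dummy)

# Load the per-round test accuracies and report summary numbers.
acc = np.load("500_com_tst_perf_all.npy")
print("rounds:", acc.shape[0])
print("final accuracy:", float(acc[-1]))
print("best accuracy:", float(acc.max()))
```

The same array can be fed directly to any plotting library to reproduce accuracy-vs-round curves.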

