Keywords: Federated Learning, Reproducibility Study, Image Classification, Sentiment Analysis, Logistic Regression, Multi-layer Perceptron
TL;DR: Reproducibility study of the paper "Towards understanding biased client selection in federated learning".
Abstract: Federated learning is a distributed optimization approach that enables cooperative training of a machine learning model on resource-limited client nodes. Such decentralized training does not require data exchange from client devices to a global server, thereby protecting data privacy and enhancing the model’s generalizability by training on heterogeneous data. In this work, we perform a reproducibility study of the recent paper “Towards understanding biased client selection in federated learning” [1]. We reproduce most of its experiments to validate the claim of the original paper that a biased client selection strategy can significantly speed up the training convergence of federated learning compared to a conventional random client selection strategy. Beyond reproduction, we also explore the performance of the proposed algorithm under different hyperparameters. The implemented code, training metrics, hyperparameters, and resulting models are open-sourced on DagsHub for easy reproduction.
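
To make the comparison in the abstract concrete, below is a minimal Python sketch contrasting random client selection with a biased, loss-based selection of the kind studied in the original paper. The function names, the candidate-set size `d`, and the per-client loss and data-size inputs are illustrative assumptions for this sketch, not the authors' released code.

```python
import numpy as np

def select_clients_random(num_clients, m, rng):
    # Conventional unbiased baseline: pick m clients uniformly at random.
    return rng.choice(num_clients, size=m, replace=False)

def select_clients_biased(local_losses, data_sizes, m, d, rng):
    # Biased selection sketch (power-of-choice style):
    # 1) sample a candidate set of d clients with probability proportional
    #    to their local data size,
    # 2) keep the m candidates with the largest current local loss.
    probs = np.asarray(data_sizes, dtype=float)
    probs /= probs.sum()
    candidates = rng.choice(len(local_losses), size=d, replace=False, p=probs)
    ranked = sorted(candidates, key=lambda k: local_losses[k], reverse=True)
    return np.array(ranked[:m])

# Hypothetical usage with made-up per-client losses and data sizes.
rng = np.random.default_rng(0)
losses = rng.uniform(0.1, 2.0, size=100)   # stand-in for current local losses
sizes = rng.integers(50, 500, size=100)    # stand-in for local dataset sizes
print(select_clients_biased(losses, sizes, m=10, d=30, rng=rng))
```

Intuitively, preferring clients with higher local loss directs more updates toward the poorly fit parts of the data distribution, which is the mechanism behind the claimed convergence speed-up over uniform random selection.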
Submission Number: 3