Abstract: Federated learning (FL) is a distributed model training paradigm that preserves clients’ data privacy. It has gained tremendous attention from both academia and industry. FL hyperparameters (e.g., the number of selected clients and the number of training passes) significantly affect the training overhead in terms of computation time, transmission time, computation load, and transmission load. However, the current practice of manually selecting FL hyperparameters imposes a heavy burden on FL practitioners because applications have different training preferences. In this article, we propose FedTune, an automatic FL hyperparameter tuning algorithm tailored to applications’ diverse system requirements in FL training. FedTune iteratively adjusts FL hyperparameters during FL training and can be easily integrated into existing FL systems. Through extensive evaluations of FedTune for diverse applications and FL aggregation algorithms, we show that FedTune is lightweight and effective, achieving 8.48%–26.75% system overhead reduction compared to using fixed FL hyperparameters. This article assists FL practitioners in designing high-performance FL training solutions. The source code of FedTune is available at https://github.com/DataSysTech/FedTune.
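To make the idea of iteratively adjusting FL hyperparameters during training concrete, here is a minimal Python sketch of such a loop. Everything in it is an illustrative assumption rather than FedTune's actual algorithm: the `run_fl_round` stub stands in for a real FL round whose overheads would be measured, the `alpha`/`beta` preference weights are a stand-in for an application's training preferences, and the greedy neighbor probe is one simple way to adjust the number of selected clients and training passes between rounds.

```python
import random

def run_fl_round(num_clients, num_passes):
    """Placeholder for one FL training round; returns per-round overheads.

    A real system would measure these values. Here we fabricate a toy model
    in which more local passes raise computation time and more selected
    clients raise transmission time.
    """
    comp_time = num_passes * 1.0 + random.uniform(0, 0.1)
    trans_time = num_clients * 0.2 + random.uniform(0, 0.1)
    return comp_time, trans_time

def weighted_overhead(comp_time, trans_time, alpha=0.5, beta=0.5):
    # Application preference weights: alpha for computation, beta for
    # transmission (hypothetical; chosen only for illustration).
    return alpha * comp_time + beta * trans_time

num_clients, num_passes = 20, 5  # initial FL hyperparameters
best = weighted_overhead(*run_fl_round(num_clients, num_passes))

for round_idx in range(100):  # FL training rounds
    # Probe a neighboring hyperparameter setting each round and keep it
    # only if it lowers the application's weighted system overhead.
    cand_clients = max(1, num_clients + random.choice([-1, 1]))
    cand_passes = max(1, num_passes + random.choice([-1, 1]))
    cost = weighted_overhead(*run_fl_round(cand_clients, cand_passes))
    if cost < best:
        num_clients, num_passes, best = cand_clients, cand_passes, cost

print(f"chosen: clients={num_clients}, passes={num_passes}, overhead={best:.2f}")
```

The point of the sketch is the integration pattern the abstract describes: tuning happens inside the training loop, using only per-round overhead measurements, so no separate offline hyperparameter search is needed.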