Client Selection With Staleness Compensation in Asynchronous Federated Learning

Published: 01 Jan 2023, Last Modified: 12 May 2023 · IEEE Trans. Veh. Technol. 2023
Abstract: As a nascent privacy-preserving machine learning (ML) paradigm, federated learning (FL) leverages distributed clients at the network edge to collaboratively train an ML model. Asynchronous FL overcomes the straggler issue that plagues synchronous FL, but it incurs the staleness problem, which degrades the training performance of FL over wireless networks. To tackle the staleness problem, we develop a staleness compensation algorithm that improves the training performance of FL in terms of convergence and test accuracy. By including the first-order term of the Taylor expansion of the gradient function, the proposed algorithm compensates for staleness in asynchronous FL. To further reduce training latency, we model client selection for asynchronous FL as a multi-armed bandit problem and develop an online client selection algorithm that minimizes training latency without prior knowledge of channel conditions or local computing status. Simulation results show that the proposed algorithm outperforms baseline algorithms in both test accuracy and training latency.
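
The abstract describes compensating a stale gradient with the first-order term of the Taylor expansion of the gradient function. As a rough illustration only (not the paper's exact algorithm), the sketch below uses the common delay-compensation trick of approximating the Hessian by the element-wise square of the stale gradient; the function name `compensate_stale_gradient` and the scaling factor `lam` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def compensate_stale_gradient(stale_grad, w_current, w_stale, lam=0.5):
    """Approximate the gradient at the current model w_current from a
    gradient computed at a stale model w_stale, via the first-order
    Taylor expansion of the gradient function:

        g(w_current) ~= g(w_stale) + H(w_stale) @ (w_current - w_stale)

    The Hessian H is replaced by a cheap diagonal approximation,
    lam * g o g (element-wise square of the stale gradient), as in
    standard delay-compensated async SGD. This is a sketch, not the
    paper's exact compensation rule.
    """
    hessian_diag = lam * stale_grad * stale_grad          # diag(H) ~= lam * g o g
    return stale_grad + hessian_diag * (w_current - w_stale)

# Example: server receives a stale update and compensates before applying it.
w_stale = np.array([0.10, -0.20, 0.30])   # model version the client trained on
w_now = np.array([0.12, -0.25, 0.28])     # current global model
g_stale = np.array([0.50, -0.10, 0.40])   # gradient reported by the client
g_comp = compensate_stale_gradient(g_stale, w_now, w_stale)
```

The abstract also casts client selection as a multi-armed bandit problem, selecting clients online to minimize training latency without prior knowledge of channel or local computing conditions. A minimal UCB-style sketch of that idea follows, assuming per-round latency is observable after each selection; the class name `UCBClientSelector` and the exploration coefficient `explore_coef` are hypothetical choices for illustration.

```python
import math
import numpy as np

class UCBClientSelector:
    """Treat each client as a bandit arm; the reward is the negative
    observed round latency (communication + computation), so maximizing
    reward minimizes expected training latency."""

    def __init__(self, num_clients, explore_coef=2.0):
        self.counts = np.zeros(num_clients)      # times each client was selected
        self.avg_reward = np.zeros(num_clients)  # running mean of -latency
        self.c = explore_coef
        self.t = 0                               # total selections so far

    def select(self):
        self.t += 1
        # Try every client once before applying the UCB rule.
        untried = np.where(self.counts == 0)[0]
        if untried.size > 0:
            return int(untried[0])
        ucb = self.avg_reward + np.sqrt(self.c * math.log(self.t) / self.counts)
        return int(np.argmax(ucb))

    def update(self, client, latency):
        # Incrementally update the running mean reward for this client.
        self.counts[client] += 1
        reward = -latency
        self.avg_reward[client] += (reward - self.avg_reward[client]) / self.counts[client]
```

In use, the server would call `select()` each round, dispatch the model to the chosen client, measure the observed round latency, and feed it back through `update()`, balancing exploration of rarely chosen clients against exploitation of historically fast ones.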