Abstract: Federated learning (FL) has become one of the mainstream paradigms for multi-party collaborative learning with privacy protection. Since it is difficult to guarantee that all FL devices are active simultaneously, a common approach is to have only a subset of devices participate in each round of model training. However, such partial device participation may introduce significant bias into the trained model. In this paper, we first conduct a theoretical analysis to investigate the negative impact of biased device participation and derive the convergence rate of FedAvg, the best-known FL algorithm, under biased device participation. We further propose an optimized participation-aware federated learning algorithm called AdaFed, which adaptively tunes the aggregation weight of each device based on its historical participation records and removes the bias introduced by partial device participation. We also formally prove a convergence guarantee for AdaFed. Finally, we conduct trace-driven experiments to validate the effectiveness of our proposed algorithm. The experimental results are consistent with our theoretical analysis and show that, by eliminating the negative effect of biased device participation, AdaFed improves global model accuracy and converges much faster than state-of-the-art FL algorithms.
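The debiasing idea summarized above can be sketched as follows. The abstract only states that AdaFed tunes each device's aggregation weight from its historical participation records; the exact rule is not given here, so this sketch assumes a simple inverse-participation-frequency weighting, and all names (`adafed_aggregate`, the scalar "updates") are hypothetical illustrations, not the authors' implementation.

```python
def adafed_aggregate(updates, participation_counts, round_idx):
    """Hedged sketch: aggregate client updates with weights inversely
    proportional to each device's empirical participation frequency,
    so rarely sampled devices are not under-represented.

    updates: dict client_id -> model update (a scalar here for simplicity)
    participation_counts: dict client_id -> rounds participated so far
    round_idx: current round number (1-based)
    """
    weights = {}
    for cid in updates:
        # Empirical participation rate of this device up to the current round
        freq = participation_counts[cid] / round_idx
        # Up-weight devices that have participated less often
        weights[cid] = 1.0 / max(freq, 1e-8)
    total = sum(weights.values())
    # Normalized weighted average of the participating devices' updates
    return sum(weights[cid] * updates[cid] for cid in updates) / total
```

For example, a device seen in 1 of 4 rounds receives four times the weight of a device seen in all 4 rounds, counteracting the sampling bias that uniform averaging would introduce.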