Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation

FL-NeurIPS 2022 Poster · 23 Sept 2022 (modified: 09 Nov 2022)
Keywords: Asynchronous Communication, Buffered Aggregation, Federated Learning, Asynchronous Federated Learning, Distributed Optimization
TL;DR: We revisit the FedBuff algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm.
Abstract: Synchronous updates may compromise the efficiency of cross-device federated learning as the number of active clients increases. The FedBuff algorithm (Nguyen et al.) alleviates this problem by allowing asynchronous (stale) updates, which improves the scalability of training while preserving privacy via secure aggregation. We revisit the FedBuff algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the algorithm's convergence rate, accounting for heterogeneity in data, batch size, and delay.
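To make the buffered asynchronous aggregation described above concrete, the following is a minimal sketch (not the authors' implementation) of a FedBuff-style server loop: clients run local SGD on possibly stale copies of the model, the server collects their deltas in a buffer, and applies a server step only once the buffer is full. Names such as `buffer_size`, `server_lr`, and `local_sgd` are illustrative assumptions.

```python
import numpy as np

def local_sgd(model, data, lr=0.01, steps=5):
    """Client-side local SGD on a least-squares objective; returns the delta."""
    w = model.copy()
    x, y = data  # one mini-batch per client, kept simple for the sketch
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w - model  # update sent back to the server

def fedbuff_server(clients, rounds=100, buffer_size=10, server_lr=1.0, dim=5):
    """Server loop: aggregate once `buffer_size` client updates have arrived."""
    model = np.zeros(dim)
    buffer, pending = [], []
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        # Dispatch the current model to a sampled client (copies become stale).
        cid = int(rng.integers(len(clients)))
        pending.append((cid, model.copy()))
        # An earlier client finishes and returns a delta computed on a stale model.
        done_cid, stale_model = pending.pop(int(rng.integers(len(pending))))
        buffer.append(local_sgd(stale_model, clients[done_cid]))
        # Apply a server update only when the buffer is full, then clear it.
        if len(buffer) == buffer_size:
            model += server_lr * np.mean(buffer, axis=0)
            buffer.clear()
    return model
```

Buffering `buffer_size` updates before applying a server step is what allows secure aggregation over the buffer while still tolerating stale client updates; the convergence analysis in the paper quantifies how this staleness, together with data heterogeneity and batch size, affects the rate.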