Decentralized Federated Learning Under Communication Delays

Published: 01 Jan 2022, Last Modified: 12 May 2023 (SECON Workshops 2022)
Abstract: Federated learning (FL) provides a privacy-preserving approach to training models across multiple edge agents or servers without sharing raw data. We study a fully decentralized FL framework over dynamic network topologies that employs momentum gradient descent (MGD) to accelerate convergence; we refer to the resulting algorithm as DMFL. Because transmission delays are bounded but time-varying, model updates are asynchronous across agents, which leads to a time-varying information update delay. Extensive experiments demonstrate the performance of DMFL against related algorithms and analyze how the information update delay, the network size, and the data distribution affect its convergence.
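A minimal sketch of the kind of per-agent update the abstract describes, assuming a gossip-style averaging step over possibly stale neighbor models followed by a local MGD step. The function name, the uniform mixing scheme, and all parameters below are illustrative assumptions, not the paper's exact DMFL algorithm.

```python
import numpy as np

def dmfl_step(w_i, v_i, delayed_neighbor_models, grad_fn,
              lr=0.1, momentum=0.9):
    """One illustrative local update for agent i (hypothetical sketch).

    w_i: current local model parameters (np.ndarray)
    v_i: current momentum buffer (np.ndarray)
    delayed_neighbor_models: list of neighbor parameter vectors, each
        possibly stale due to a bounded, time-varying transmission delay
    grad_fn: callable returning the local gradient at given parameters
    """
    # Consensus step: mix the local model with the delayed neighbor models.
    # Staleness enters here, since these copies may lag the neighbors'
    # current models by a bounded number of iterations.
    mixed = np.mean([w_i] + list(delayed_neighbor_models), axis=0)

    # Momentum gradient descent (MGD) step on the local objective.
    v_next = momentum * v_i + grad_fn(mixed)
    w_next = mixed - lr * v_next
    return w_next, v_next
```

In this reading, asynchrony requires no coordination: each agent mixes whatever neighbor models it last received and proceeds, so the information update delay shows up only through the staleness of `delayed_neighbor_models`.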