Asymmetrically Decentralized Federated Learning

Published: 01 Jan 2025 · Last Modified: 15 Oct 2025 · IEEE Trans. Computers 2025 · CC BY-SA 4.0
Abstract: To address the communication burden and privacy concerns associated with the centralized server in Federated Learning (FL), Decentralized Federated Learning (DFL) has emerged, which discards the server in favor of a peer-to-peer (P2P) communication framework, significantly expanding the application scenarios of FL. However, most existing DFL algorithms are based on symmetric topologies, such as ring and grid topologies, which can easily lead to deadlocks and are susceptible to poor network link quality in practice. To address these issues, we propose DFedSGPSM, a transitional framework that converts symmetric DFL optimizers into asymmetric variants. By adopting the Push-Sum protocol over asymmetric network topologies, our framework circumvents the deadlock and link-quality issues prevalent in symmetric configurations. To further validate the effectiveness of our framework, we integrate local momentum (from DFedAvgM) and Sharpness-Aware Minimization (SAM, from DFedSAM) into DFedSGPSM to accelerate training and seek flat local minima, demonstrating that existing symmetric DFL optimizers can be seamlessly ported to asymmetric DFL. Theoretical analysis proves that DFedSGPSM achieves a linear speedup rate of $\mathcal{O}\left(\frac{1}{\sqrt{nT}}\right)$ in the non-convex setting. The analysis also reveals that improved topological connectivity yields tighter convergence upper bounds. Empirically, extensive experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets demonstrate the superior generalization performance of our proposed algorithm compared with several existing state-of-the-art (SOTA) optimizers.