Performance Evaluation of Decentralized Federated Learning: Impact of Fully and K-Connected Topologies, Heterogeneous Computing Resources, and Communication Bandwidth

Published: 01 Jan 2025 · Last Modified: 15 May 2025 · IEEE Access 2025 · CC BY-SA 4.0
Abstract: Decentralized federated learning (DFL) enables collaborative model training across distributed devices while preserving data privacy. However, DFL performance in real-world scenarios, which are characterized by varying network topologies, heterogeneous client resources, and uneven communication bandwidth, remains underexplored. This study analyzes the impact of network topology, focusing on fully connected and k-connected structures, and examines how heterogeneity in computing resources and communication bandwidth affects system efficiency, using the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets. The results show that fully connected topologies offer robust communication but suffer from significant scalability issues, whereas k-connected networks provide a more scalable alternative with reduced communication overhead. Furthermore, clients with limited CPU resources increase convergence times by as much as 30%, and variations in communication bandwidth exacerbate convergence times by up to 50%. These findings provide practical insights and guidelines for designing and optimizing scalable DFL systems for real-world deployments.
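To illustrate the topology trade-off the abstract describes, the following is a minimal sketch of one decentralized-averaging round over a fully connected versus a k-connected (ring-based) topology. The topology constructors, the scalar "model" per client, and the uniform neighbor-averaging rule are illustrative assumptions for exposition, not the paper's exact protocol.

```python
# Hypothetical DFL sketch: each client holds a scalar "model" and, each
# round, replaces it with the average of its own value and its neighbors'.
# This is a generic gossip-averaging step, assumed here for illustration.

def k_connected_ring(n, k):
    """k-connected topology: client i links to its k nearest ring neighbors
    (k even), so each client exchanges only k messages per round."""
    return {i: [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
            for i in range(n)}

def fully_connected(n):
    """Fully connected topology: every client links to all n-1 others."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def gossip_round(models, topology):
    """One synchronous round of uniform neighborhood averaging."""
    return [(models[i] + sum(models[j] for j in topology[i]))
            / (1 + len(topology[i])) for i in topology]

n = 8
models = [float(i) for i in range(n)]   # divergent initial local models
topo = k_connected_ring(n, 2)
for _ in range(20):
    models = gossip_round(models, topo)
# After repeated rounds, local models approach the global mean (3.5 here),
# while per-round link count is n*k instead of n*(n-1).
```

The link-count comparison is the scalability point: the fully connected graph uses n(n-1) directed links per round (56 for n = 8), while the k-connected ring uses only n·k (16 here), at the cost of slower consensus.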