Towards Efficient Decentralized Federated Learning: A Survey

Published: 01 Jan 2024 · Last Modified: 21 May 2025 · ADMA (2) 2024 · CC BY-SA 4.0
Abstract: Federated Learning (FL) is a distributed machine learning technique that has been increasingly adopted across many applications due to its capability to disseminate clients’ local knowledge while preserving their privacy. Systems that rely on Centralized Federated Learning (CFL) require a central entity to aggregate a global model. However, this centralized structure can result in higher latency and increased vulnerability to attacks or failures. Decentralized Federated Learning (DFL), which relies on direct communication between clients to train models collaboratively, has emerged as an efficient alternative to CFL by avoiding dependence on a central server. In this study, we conduct a comprehensive survey of approaches proposed to optimize the performance and efficiency of DFL in terms of memory, communication, and computation, and to handle divergent (non-IID) datasets across clients. First, we introduce the DFL framework and highlight its pertinent challenges. Then, we explore the existing methods and categorize them by the mechanisms they use to address system heterogeneity and data heterogeneity in DFL. Finally, we highlight some application scenarios of DFL.
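To make the serverless aggregation idea concrete, here is a minimal sketch of one synchronous gossip-averaging round, a common DFL primitive in which each client averages its model parameters with those of its direct neighbors. The topology, client IDs, and parameter values are illustrative assumptions, not a specific method from the survey.

```python
def gossip_round(params, neighbors):
    """One synchronous gossip round: each client replaces its parameter
    vector with the uniform average over itself and its neighbors.
    (Hypothetical uniform-weight rule for illustration.)"""
    new_params = {}
    for c, vec in params.items():
        group = [c] + neighbors[c]  # the client plus its direct neighbors
        new_params[c] = [
            sum(params[g][i] for g in group) / len(group)
            for i in range(len(vec))
        ]
    return new_params

# Fully connected topology over three clients (illustrative example):
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
params = {0: [0.0], 1: [3.0], 2: [6.0]}
params = gossip_round(params, neighbors)
# With full connectivity, every client reaches the global mean (3.0)
# after a single round; sparser topologies need more rounds to mix.
```

In practice each client would also run local training steps between gossip rounds; sparser topologies (rings, grids) trade slower mixing for lower communication cost, which is one of the efficiency axes this survey covers.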