FedCD: A Hybrid Federated Learning Framework for Efficient Training With IoT Devices

Published: 01 Jan 2024 · Last Modified: 12 Nov 2024 · IEEE Internet of Things Journal, 2024 · CC BY-SA 4.0
Abstract: With billions of Internet of Things (IoT) devices producing vast amounts of data globally, privacy and efficiency challenges arise in artificial intelligence applications. Federated learning (FL) has been widely adopted to train deep neural networks (DNNs) without privacy leakage. Existing centralized and decentralized FL (DFL) architectures have limitations, including memory burden, heavy bandwidth pressure, and non-independent and identically distributed (non-IID) data issues. This article introduces FedCD, a novel hybrid FL framework that merges the benefits of both centralized and DFL architectures. FedCD strategically distributes the model based on layer sizes and consensus distances (i.e., the deviation between the local models and the global average model), effectively relieving network bandwidth pressure and accelerating training even under the non-IID setting. This approach significantly mitigates resource constraints and improves model accuracy, offering a promising solution to the challenges of distributed machine learning. Extensive experimental results show the high effectiveness of FedCD: its total completion time is reduced by 16.3%–53%, and its average accuracy improvement is 1.85% compared with the baselines.
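The abstract defines consensus distance as the deviation between the local models and the global average model. As a rough illustration only (not the paper's exact formulation), the Python sketch below computes one plausible instantiation: the L2 norm of each client's parameter deviation from the element-wise average model. The function name `consensus_distances`, the dict-of-arrays model representation, and the choice of the L2 norm are all assumptions made here for illustration.

```python
import numpy as np

def consensus_distances(local_models):
    """Per-client consensus distance: L2 deviation of each local model
    from the global average model (illustrative assumption, not
    necessarily the paper's exact definition).

    local_models: list of dicts mapping layer name -> np.ndarray of weights.
    Returns one distance per client.
    """
    layer_names = local_models[0].keys()
    # Global average model: element-wise mean of each layer across clients.
    global_avg = {
        name: np.mean([m[name] for m in local_models], axis=0)
        for name in layer_names
    }
    distances = []
    for m in local_models:
        # Stack all layer deviations into one vector and take its L2 norm.
        diff = np.concatenate(
            [(m[name] - global_avg[name]).ravel() for name in layer_names]
        )
        distances.append(float(np.linalg.norm(diff)))
    return distances

# Example: three clients sharing a tiny two-layer model.
rng = np.random.default_rng(0)
clients = [
    {"fc1": rng.normal(size=(4, 4)), "fc2": rng.normal(size=(4, 2))}
    for _ in range(3)
]
print(consensus_distances(clients))
```

A client whose distance is large has drifted far from the consensus, which is the kind of signal a framework like FedCD could use, together with layer sizes, to decide how to distribute model layers.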