Asynchronous Semi-Supervised Federated Learning with Provable Convergence in Edge Computing

Published: 01 Jan 2022 · IEEE Network 2022 · CC BY-SA 4.0
Abstract: Traditional federated learning methods assume that users have fully labeled data on their devices for training, but in practice labels are difficult to obtain for reasons such as user privacy concerns, high labeling costs, and lack of expertise. Semi-supervised learning has been introduced into federated learning to address the lack of labels, but its performance suffers from slow training and non-convergence in real network environments. In this article, we propose Federated Incremental Learning (FedIL), a semi-supervised federated learning (SSFL) framework for edge computing that overcomes these limitations. FedIL introduces a group-based asynchronous training algorithm with provable convergence, which accelerates model training by allowing more clients to participate simultaneously. We developed a prototype system and performed trace-driven simulations to demonstrate FedIL's superior performance.
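The abstract names two ingredients but gives no implementation details. As an illustrative sketch only (not the paper's FedIL code), the toy Python simulation below combines them on a linear-regression problem: pseudo-labeling for each client's unlabeled data, and group-based asynchronous aggregation in which each group's averaged update is mixed into the global model without waiting for the other groups. All names and parameters here (`labeled_frac`, the mixing weight `alpha`, group count, step sizes) are hypothetical choices for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, N_CLIENTS, GROUPS = 5, 8, 2
true_w = rng.normal(size=DIM)  # ground-truth model for the toy problem

def make_client(n=60, labeled_frac=0.2):
    """Each client holds n samples, only a fraction of which are labeled."""
    X = rng.normal(size=(n, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    labeled = rng.random(n) < labeled_frac  # mask of points with real labels
    return X, y, labeled

clients = [make_client() for _ in range(N_CLIENTS)]

def local_update(w, X, y, labeled, lr=0.05, steps=5):
    """Semi-supervised local SGD: use real labels where available and
    pseudo-labels (the current global model's predictions) elsewhere."""
    w = w.copy()
    pseudo_y = np.where(labeled, y, X @ w)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - pseudo_y) / len(y)
        w -= lr * grad
    return w

# Group-based asynchronous rounds: clients are partitioned into groups,
# and each group pushes its averaged update to the server as soon as it
# finishes. Asynchrony is simulated here by interleaving group arrivals.
w_global = np.zeros(DIM)
groups = np.array_split(np.arange(N_CLIENTS), GROUPS)
for rnd in range(30):
    g = groups[rnd % GROUPS]  # the group reporting at this "tick"
    updates = [local_update(w_global, *clients[i]) for i in g]
    w_group = np.mean(updates, axis=0)
    alpha = 0.5  # server-side mixing weight to damp stale group updates
    w_global = (1 - alpha) * w_global + alpha * w_group

print("distance to true model:", np.linalg.norm(w_global - true_w))
```

The mixing step `(1 - alpha) * w_global + alpha * w_group` is the usual way asynchronous servers bound the influence of stale updates; the paper's actual aggregation rule and convergence conditions are given in the full text, not reproduced here.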