Federated learning with a Balanced Heterogeneous-Yoke and Loose restriction

Published: 01 Jan 2024 · Last Modified: 11 Apr 2025 · Internet of Things, 2024 · License: CC BY-SA 4.0
Abstract: The most critical challenges of federated learning, which performs joint modeling via model sharing, are model drift caused by the heterogeneous distribution of local data and the participation of only a random subset of devices in each round due to communication constraints. To address these dilemmas, we propose a novel Federated Learning algorithm with a Balanced Heterogeneous-Yoke and Loose restriction (FedByL), which exploits more of the clients' historical gradients to accelerate model convergence; in addition, a more relaxed restriction is applied during local updates, and an incentive clause is introduced to encourage cooperating clients to move toward the global optimum. Experiments and analyses on 95 experimental settings across five real-world datasets and one synthetic dataset show that FedByL outperforms strong baseline algorithms (e.g., FedDyn, SCAFFOLD) in both convergence speed and accuracy. In particular, FedByL's communication overhead is the same as FedAvg's: it is lightweight and requires only one additional storage variable per client, which greatly reduces the communication burden and the risk of information being hijacked.
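To make the abstract's description concrete, the following is a minimal sketch of what a client update of this kind could look like: each client keeps a single extra state vector (an estimate of its historical gradient), applies a loose proximal-style pull toward the global model, and the server averages models exactly as FedAvg does. All names (`h`, `mu`, `alpha`), the toy least-squares loss, and the exact form of the correction are illustrative assumptions, not FedByL's actual formulation.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of a toy least-squares loss (stand-in for the client's local loss)."""
    return X.T @ (X @ w - y) / len(y)

def client_update(w_global, h, X, y, lr=0.1, mu=0.01, alpha=0.5, steps=10):
    """One client's local training.

    h     : the single extra stored variable (historical-gradient estimate) -- assumed.
    mu    : weight of a *loose* restriction pulling w toward w_global -- assumed.
    alpha : weight of the historical-gradient correction term -- assumed.
    """
    w = w_global.copy()
    for _ in range(steps):
        g = local_gradient(w, X, y)
        # loose local restriction + historical-gradient correction (assumed form)
        w -= lr * (g + mu * (w - w_global) - alpha * h)
    # refresh the stored historical gradient with the latest local gradient
    h_new = local_gradient(w, X, y)
    return w, h_new

def federated_round(w_global, clients, states):
    """Average the participating clients' models; per-round communication matches FedAvg."""
    new_ws = []
    for cid, (X, y) in clients.items():
        w_i, states[cid] = client_update(w_global, states[cid], X, y)
        new_ws.append(w_i)
    return np.mean(new_ws, axis=0)

# toy usage with two synthetic clients
rng = np.random.default_rng(0)
d = 5
clients = {i: (rng.normal(size=(20, d)), rng.normal(size=20)) for i in range(2)}
states = {i: np.zeros(d) for i in range(2)}
w = np.zeros(d)
for _ in range(5):
    w = federated_round(w, clients, states)
print(w)
```

Note that only the model is exchanged each round and the extra variable `h` stays on the client, which is consistent with the abstract's claim of FedAvg-level communication plus one additional local storage variable.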