Tackling Periodic Distribution Shifts in Federated Learning with Half-Cycle Knowledge Distillation

Published: 2024 · Last Modified: 27 Nov 2025 · ICONIP (2) 2024 · CC BY-SA 4.0
Abstract: Federated Learning (FL) has emerged as a promising approach for collaborative machine learning without sharing local data, thus preserving privacy and reducing communication overhead. However, the periodic distribution shift caused by the varying availability of clients poses significant challenges to training stability and performance. In this paper, we propose Federated Half-Cycle Knowledge Distillation (FedHCKD), which incorporates a regularization scheme to address these challenges. By leveraging a model snapshot from half a cycle earlier to regularize current training, our method mitigates the adverse effects of client distribution shifts and enhances the generalization capability of models, especially for clients with fewer training samples. Experimental results on the CIFAR and EMNIST datasets demonstrate that our approach significantly outperforms traditional FL methods, offering improved efficiency and robustness in scenarios with periodic shifts in client data distribution.
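The abstract does not give implementation details, but the core idea it describes, distilling from a model snapshot taken half an availability cycle earlier to regularize each client's current update, can be sketched as below. This is a minimal illustration under assumptions; the names `half_cycle_model`, `kd_weight`, and `temperature` are hypothetical and not the authors' API.

```python
import torch
import torch.nn.functional as F

def half_cycle_kd_loss(student_logits, teacher_logits, labels,
                       kd_weight=0.5, temperature=2.0):
    """Cross-entropy on labels plus a distillation term pulling the current
    (student) model toward a snapshot saved half a cycle earlier.
    Hyperparameter names and values here are illustrative assumptions."""
    ce = F.cross_entropy(student_logits, labels)
    # Soft targets from the half-cycle-old snapshot acting as the teacher.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return ce + kd_weight * kd

def local_update(model, half_cycle_model, loader, optimizer, device="cpu"):
    """One client's local training pass, regularized by the global-model
    snapshot from half an availability cycle ago (a sketch, not the paper's
    exact procedure)."""
    model.train()
    half_cycle_model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            teacher_logits = half_cycle_model(x)
        loss = half_cycle_kd_loss(model(x), teacher_logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The half-cycle offset matters because clients available in one half of the cycle see data unlike that of clients in the other half; distilling across the two halves smooths the oscillation that periodic availability would otherwise induce.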