FedSOKD-TFA: Federated Learning with Stage-Optimal Knowledge Distillation and Three-Factor Aggregation
Abstract: Federated learning is a model training paradigm that protects user data and privacy, making it a feasible solution for multi-user collaborative training. However, due to data heterogeneity among clients, the optimization directions of the client models diverge, leading to degraded training performance and accuracy fluctuations during training. To address this problem, we introduce a stage-optimal strategy and propose a stage-optimal knowledge distillation method. The proposed method retains the optimal local models and uses knowledge distillation to guide their subsequent training, reducing the loss of previously learned knowledge. Additionally, we propose a new aggregation method that considers both static and dynamic factors. For evaluation, we conducted experiments on the CIFAR-10 and CIFAR-100 datasets. The proposed method significantly improves performance, achieving a maximum accuracy gain of \(13.07\%\) over the FedPer baseline and attaining state-of-the-art performance. The code is available at the following link: https://github.com/FedSOKD-TFA/FedSOKD-TFA.
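To make the stage-optimal distillation idea concrete, the following is a minimal sketch of a local training loss in which the retained stage-optimal model acts as a teacher for the current local model. This is an illustration assuming a standard KL-based distillation objective; the function name `stage_optimal_kd_loss` and the hyperparameters `T` and `alpha` are hypothetical, not values from the paper.

```python
import torch
import torch.nn.functional as F

def stage_optimal_kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Task loss plus distillation from the retained stage-optimal local model.

    Assumption: a conventional softened-softmax KD loss; T (temperature) and
    alpha (mixing weight) are illustrative hyperparameters only.
    """
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between softened student and teacher distributions;
    # the teacher is the best local model kept so far, so it is detached
    # from the computation graph and receives no gradients.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd
```

Likewise, for the aggregation that combines static and dynamic factors, a heavily hedged sketch of per-client weight computation follows. The specific factors used here (dataset size as the static factor; validation accuracy and update staleness as dynamic factors) and the mixing rule are assumptions for illustration, not the paper's actual three factors.

```python
import numpy as np

def three_factor_weights(num_samples, accuracies, staleness):
    """Hypothetical aggregation weights mixing one static factor with two
    dynamic factors; the factor choices and mixing rule are assumptions."""
    size = np.asarray(num_samples, dtype=float)   # static: local dataset size
    acc = np.asarray(accuracies, dtype=float)     # dynamic: recent local accuracy
    stale = np.asarray(staleness, dtype=float)    # dynamic: rounds since last update
    w = (size / size.sum()) * acc * np.exp(-stale)
    return w / w.sum()                            # normalized aggregation weights
```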
External IDs: dblp:conf/icpr/LiuGSLJG24