Keywords: Federated Learning
TL;DR: This work uses the relative weight divergence between each client's update and the aggregated update to cluster clients and to govern knowledge transfer between clusters, improving both initial and personalized performance.
Abstract: The majority of federated learning (FL) approaches aim to learn either a high-performing global model or multiple personalized models. Although there has been significant progress in each research direction, the optimization of one often comes at the expense of the other. In this work, we approach this problem by investigating how different clusters of clients with varying degrees of data heterogeneity may impact the single global model. From this empirical analysis, we discover a surprising insight: despite a significant distribution mismatch between clusters, the knowledge shared from clusters with low data heterogeneity to clusters with high data heterogeneity can significantly boost the latter's personalized accuracy, but not vice versa. Building on this observation, we propose a cluster-based approach named FedCUAU, in which clients are clustered based on their degree of data heterogeneity, and knowledge is selectively transferred between clusters. Experimental results on standard FL benchmarks show that FedCUAU can be plugged into existing FL algorithms to achieve considerable improvements in both initial and personalized performance. Empirical results show that FedCUAU improves FedAvg's initial global accuracy by $1.53\%$ and $1.82\%$ on CIFAR-10 and FEMNIST, respectively, and personalized accuracy by $0.29\%$ and $3.81\%$.
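The clustering step described in the TL;DR can be illustrated with a minimal sketch. The helper names (`relative_divergence`, `cluster_clients`), the fixed threshold, and the two-cluster split below are illustrative assumptions, not the paper's exact procedure, which may use a different divergence measure or more clusters.

```python
import numpy as np

def flatten(update):
    """Concatenate a client's parameter tensors into a single vector."""
    return np.concatenate([np.asarray(p).ravel() for p in update])

def relative_divergence(client_update, aggregated_update):
    """Relative weight divergence between a client update and the aggregated update."""
    c, g = flatten(client_update), flatten(aggregated_update)
    return np.linalg.norm(c - g) / (np.linalg.norm(g) + 1e-12)

def cluster_clients(client_updates, aggregated_update, threshold=0.5):
    """Assumed two-way split: clients whose updates stay close to the
    aggregate (e.g. the FedAvg average) form the low-heterogeneity cluster,
    the rest form the high-heterogeneity cluster."""
    low, high = [], []
    for cid, update in client_updates.items():
        div = relative_divergence(update, aggregated_update)
        (low if div <= threshold else high).append(cid)
    return {"low_heterogeneity": low, "high_heterogeneity": high}
```

Under this sketch, the server would compute the clusters once per round and then allow knowledge to flow from the low-heterogeneity cluster to the high-heterogeneity one but not in the reverse direction, in line with the observation stated in the abstract.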
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)