PFedSA: Personalized Federated Multi-Task Learning via Similarity Awareness
Abstract: Federated Learning (FL) is a distributed machine learning framework in which multiple remote clients collaboratively train models. However, in real-world settings, non-Independent and Identically Distributed (non-IID) data means that the global model produced by traditional FL algorithms no longer meets the needs of all clients, and its accuracy drops sharply. In this paper, we propose a personalized federated multi-task learning method via similarity awareness (PFedSA), which captures the similarity between clients' data from the model parameters they upload, thereby facilitating collaborative training among similar clients and providing personalized models tailored to each client's data distribution. Specifically, PFedSA uncovers the intrinsic cluster structure among clients and introduces personalized patch layers within each cluster to personalize the cluster model. PFedSA also preserves the generalization ability of the models, allowing each client to benefit during training from clients with similar data distributions; the greater the similarity, the larger the benefit. We evaluate PFedSA on the MNIST, EMNIST, and CIFAR-10 datasets and investigate how different data partitioning schemes affect its performance. The results show that PFedSA achieves the best personalization performance in all data settings, with more clients reaching higher accuracy, and it is especially effective when client data is non-IID.
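The abstract describes clustering clients by the similarity of their uploaded model parameters. The following is a minimal sketch of one way such similarity-aware grouping could be realized; it is illustrative only and not the authors' implementation. The helper names (`flatten_params`, `cluster_clients`) and the cosine-similarity threshold are assumptions introduced for this example.

```python
# Hypothetical sketch: group clients by the cosine similarity of their
# uploaded (flattened) model parameters, in the spirit of the
# similarity-aware clustering described in the abstract.
import numpy as np

def flatten_params(param_list):
    """Flatten a list of parameter arrays into a single vector."""
    return np.concatenate([p.ravel() for p in param_list])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cluster_clients(client_params, sim_threshold=0.9):
    """Greedily place each client into the first cluster whose
    representative is sufficiently similar; otherwise open a new cluster."""
    vectors = [flatten_params(p) for p in client_params]
    clusters = []  # each cluster is a list of client indices
    for i, v in enumerate(vectors):
        for cluster in clusters:
            # Use the first member as the cluster representative.
            if cosine_similarity(v, vectors[cluster[0]]) >= sim_threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy usage: clients 0 and 1 have nearly identical parameters, client 2 does not.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
clients = [[base], [base + 0.01 * rng.normal(size=(4, 4))], [rng.normal(size=(4, 4))]]
print(cluster_clients(clients))  # e.g. [[0, 1], [2]]
```

In a full method along these lines, each resulting cluster would then train a shared cluster model, with per-client personalized layers added on top, as the abstract outlines.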