Abstract: Federated learning (FL) has been widely applied in the medical field, allowing clients to collaboratively train global models without sharing local data. Nevertheless, the diversity and scarcity of samples from rare diseases can degrade the performance of client-side local models when a single global model is used. Moreover, the direct transmission of local models or parameters is likely to violate user privacy. To address these problems, we propose a Grouped Federated Meta-Learning (GrFML) method that improves the performance of local personalized models while protecting data privacy. Specifically, we first use a self-attention mechanism to extract partial features from each client's local data, which are uploaded to the server (since only partial features, rather than the raw medical data, are uploaded, this process does not expose the private data). The server then groups clients with similar extracted features. Multiple meta-models are trained on these groups and distributed back to the clients to enhance the performance of their local models. Furthermore, during the FL process, we add dynamic perturbation to the uploaded gradients based on the model's test accuracy to protect client privacy; specifically, the perturbation magnitude is directly proportional to the test accuracy. Extensive experiments show that GrFML significantly improves the accuracy of client personalization models and achieves a good privacy-utility trade-off.
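As a rough illustration of the dynamic perturbation step described above, the sketch below adds Gaussian noise to a client's gradients before upload, with the noise scale proportional to the local model's test accuracy. The function name, the choice of Gaussian noise, and the `base_scale` hyperparameter are assumptions made for illustration; the abstract only states that the perturbation magnitude is directly proportional to test accuracy.

```python
import numpy as np

def perturb_gradients(gradients, test_accuracy, base_scale=0.01, rng=None):
    """Perturb client gradients before uploading them to the server.

    The noise standard deviation grows linearly with the local model's
    test accuracy, following the proportionality stated in the abstract.
    `base_scale` is a hypothetical hyperparameter, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    noise_scale = base_scale * test_accuracy  # magnitude proportional to accuracy
    return [g + rng.normal(0.0, noise_scale, size=g.shape) for g in gradients]

# Example: a client whose local model reaches 92% test accuracy
# perturbs two gradient tensors before sending them to the server.
grads = [np.ones((3, 3)), np.zeros(5)]
noisy_grads = perturb_gradients(grads, test_accuracy=0.92)
```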