Elevating Privacy in Federated Learning: An Efficient Approach with SAM Optimization for Personalized Local Models
Abstract: A substantial body of research shows that parameter sharing in federated learning carries a risk of privacy leakage. Existing solutions widely employ differential privacy, which rests on strong theoretical guarantees, to protect the shared parameters. However, differential privacy often causes a significant drop in model performance, especially under substantial data heterogeneity. In this paper, we propose a privacy protection scheme for personalized federated learning based on efficient local optimization, one that exploits the interplay between differential privacy and data heterogeneity. Because differential privacy is sensitive to the norm of the uploaded update, we upload only the shallow-layer parameters rather than the full parameter set, which reduces the distortion that clipping introduces into the update. In addition, we train the uploaded shallow parameters with the Sharpness-Aware Minimization (SAM) optimizer, mitigating the performance degradation caused by the noise that differential privacy adds. To validate the proposed approach, we compare it with existing state-of-the-art methods on the EMNIST, CIFAR-10, and CIFAR-100 datasets. The results show that our approach achieves the best performance on all three datasets.
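The abstract describes two mechanisms: a SAM-style two-step update applied during local training of the shallow layers, and differentially private release of the shallow-layer update (L2 clipping plus Gaussian noise) before upload. The sketch below illustrates both under stated assumptions; it is not the authors' released code, and names such as `sam_step`, `privatize_update`, `rho`, `clip_norm`, and `noise_mult` are illustrative placeholders.

```python
# Minimal PyTorch sketch of one client's contribution per round, assuming:
# (1) only "shallow" (early-layer) parameters are trained and uploaded,
# (2) SAM's two-step (ascend-then-descend) update is applied manually,
# (3) the update is privatized with the Gaussian mechanism before upload.
import torch

def sam_step(params, loss_fn, rho=0.05, lr=0.01):
    """One Sharpness-Aware Minimization step on `params` (leaf tensors)."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / grad_norm for g in grads]           # ascent direction
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                                    # move to the nearby "sharp" point
    loss_perturbed = loss_fn()
    grads = torch.autograd.grad(loss_perturbed, params)  # gradient at the perturbed weights
    with torch.no_grad():
        for p, e, g in zip(params, eps, grads):
            p.sub_(e)                                    # undo the perturbation
            p.sub_(lr * g)                               # descend with the SAM gradient

def privatize_update(old, new, clip_norm=1.0, noise_mult=1.0):
    """Clip the shallow-layer update to L2 norm `clip_norm`, then add Gaussian noise."""
    delta = [n - o for n, o in zip(new, old)]
    total = torch.sqrt(sum((d ** 2).sum() for d in delta))
    scale = min(1.0, clip_norm / (total.item() + 1e-12))          # L2 clipping factor
    return [d * scale + torch.randn_like(d) * noise_mult * clip_norm
            for d in delta]                                       # noisy update to upload
```

Because only the shallow layers are released, the clipped vector is shorter and its norm smaller, so the same privacy budget requires less relative distortion, while the deep (personalized) layers stay local and untouched by clipping or noise.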