FedADDP: Privacy-Preserving Personalized Federated Learning with Adaptive Dimensional Differential Privacy

Published: 01 Jan 2024 · Last Modified: 10 Nov 2025 · ICA3PP (5) 2024 · CC BY-SA 4.0
Abstract: Personalized Federated Learning (PFL) has attracted significant attention for its ability to customize model training for individual clients. To address the privacy risks of PFL, researchers apply differential privacy, which protects parameters by clipping them and adding noise. Existing methods use a uniform clipping threshold for all clients, which significantly reduces model accuracy because it ignores the diversity of model parameters across clients. To improve the accuracy of personalized models under privacy constraints, we propose an Adaptive Dimensional Differential Privacy framework for Personalized Federated Learning (FedADDP). The framework uses the Fisher information matrix to evaluate parameter sensitivity, distinguishing personalized parameters tailored to individual clients from global parameters shared across all clients. Global parameters are trained with global consistency regularization and a Global Robust Loss to ensure stability across clients. Furthermore, we propose an adaptive dimensional differential privacy mechanism that dynamically adjusts the clipping threshold for each dimension using historical gradient information, mitigating the accuracy loss of the personalized model. Experiments on the FEMNIST, SVHN, and CIFAR-10 datasets show that FedADDP improves accuracy by 1.67% to 23.12% across a variety of privacy levels and non-IID data distributions. The code is available at https://github.com/yyguo-xdu/FedADDP.
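To make the Fisher-based parameter split concrete, here is a minimal PyTorch sketch. It approximates the diagonal of the Fisher information matrix by averaged squared gradients and marks the highest-scoring coordinates as personalized; the diagonal approximation, the quantile cutoff, and the names `fisher_diagonal` and `split_parameters` are our illustrative assumptions, not details taken from the paper or its repository.

```python
import torch

def fisher_diagonal(model, loader, loss_fn, device="cpu"):
    """Approximate the diagonal Fisher information as the mean
    squared gradient of the loss over a client's data loader."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.to(device).eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def split_parameters(fisher, quantile=0.8):
    """Flag the most client-sensitive coordinates (top Fisher scores)
    as personalized; the remainder are treated as global parameters."""
    scores = torch.cat([f.flatten() for f in fisher.values()])
    threshold = torch.quantile(scores, quantile)
    # Boolean mask per parameter tensor: True = personalized dimension.
    return {n: f >= threshold for n, f in fisher.items()}
```

Under this sketch, masked (personalized) coordinates would stay on the client, while the unmasked global coordinates are the ones shared with the server and protected by the differential privacy mechanism.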
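The adaptive dimensional clipping mechanism can likewise be sketched. The version below assumes an exponential moving average of historical gradient magnitudes sets each dimension's clipping threshold, with Gaussian noise scaled to that per-dimension threshold; `beta`, `init_clip`, and `noise_multiplier` are illustrative hyperparameters, and the class is a sketch in the spirit of the abstract rather than the paper's exact update rule.

```python
import torch

class AdaptiveDimensionalClipper:
    """Per-dimension gradient clipping whose thresholds track an
    exponential moving average of historical gradient magnitudes."""

    def __init__(self, shape, init_clip=1.0, beta=0.9, noise_multiplier=1.0):
        self.clip = torch.full(shape, init_clip)  # one threshold per dimension
        self.beta = beta
        self.noise_multiplier = noise_multiplier

    def step(self, grad):
        # Update each dimension's threshold from historical gradient info.
        self.clip = self.beta * self.clip + (1 - self.beta) * grad.abs()
        # Clip every coordinate to its own adaptive threshold.
        clipped = torch.clamp(grad, -self.clip, self.clip)
        # Add Gaussian noise calibrated to the per-dimension sensitivity.
        noise = torch.randn_like(grad) * self.noise_multiplier * self.clip
        return clipped + noise

# Example usage on a flattened 10-dimensional gradient:
clipper = AdaptiveDimensionalClipper(shape=(10,))
noisy_grad = clipper.step(torch.randn(10))
```

Compared with a single uniform threshold, letting each dimension's threshold follow its own gradient history avoids over-clipping small-magnitude coordinates and over-noising large ones, which is the accuracy loss the abstract attributes to uniform clipping.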