Double Perturbation-Based Privacy-Preserving Federated Learning against Inference Attack

Published: 2022 · Last Modified: 15 May 2025 · GLOBECOM 2022 · CC BY-SA 4.0
Abstract: Federated Learning (FL) is a widely studied distributed training framework that allows scattered clients to collaboratively train a central model without directly sharing raw data. However, recent research has shown that the model updates or gradients uploaded in FL can be used to infer clients' sensitive data, and this attack poses a severe threat to FL. Several solutions have been developed to address this threat; although they achieve privacy preservation to a certain extent, their accuracy degrades severely and they cannot provide strong privacy protection. Against this background, we propose a double perturbation-based privacy-preserving federated learning method, in which a feature extractor and an additional blurry function are utilized to improve the objective function of Conditional Generative Adversarial Networks (CGANs), and the data generated by the CGANs are mixed with real data to construct fake training data. Meanwhile, we design an algorithm to perturb the information contained in the gradients of fully connected layers, which is most favorable to an attacker attempting to reconstruct data. Finally, simulation results show that the proposed method can effectively resist inference attacks with a negligible decline in accuracy.
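As a rough illustration of the second perturbation, the sketch below adds noise only to the gradients of fully connected layers before a client uploads them, leaving other layers untouched. This is a minimal assumption-laden sketch, not the paper's algorithm: the function name `perturb_fc_gradients`, the Gaussian noise model, the `noise_scale` parameter, and the `"fc"` naming convention for fully connected layers are all illustrative choices.

```python
import numpy as np

def perturb_fc_gradients(gradients, noise_scale=0.1, seed=0):
    """Add Gaussian noise to fully connected (fc) layer gradients only.

    Illustrative sketch: the paper targets the information in FC-layer
    gradients that is most useful for data reconstruction; the exact
    perturbation algorithm is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    perturbed = {}
    for name, grad in gradients.items():
        if name.startswith("fc"):  # assumed naming convention for FC layers
            perturbed[name] = grad + rng.normal(0.0, noise_scale, size=grad.shape)
        else:
            perturbed[name] = grad.copy()  # non-FC layers uploaded unchanged
    return perturbed

# Example: a client's per-layer gradients before upload (hypothetical names)
grads = {
    "conv1.weight": np.ones((4, 3)),
    "fc1.weight": np.ones((2, 4)),
}
safe = perturb_fc_gradients(grads, noise_scale=0.05)
```

In this sketch only the `fc1.weight` entry is noised, reflecting the paper's observation that fully connected layer gradients leak the most reconstructable information; how the noise is calibrated against accuracy loss is the crux of the actual method.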