Shielding Federated Learning: Aligned Dual Gradient Pruning Against Gradient Leakage

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submitted · Readers: Everyone
Abstract: Federated learning (FL) is a distributed learning framework that claims to protect user privacy. However, gradient inversion attacks (GIAs), which can recover users' training data from the shared gradients, pose severe privacy threats to FL. Existing defense methods adopt different techniques, e.g., differential privacy, cryptography, and gradient perturbation, to defend against GIAs. Nevertheless, all current state-of-the-art defense methods suffer from a trade-off among privacy, utility, and efficiency in FL. To address the weaknesses of existing solutions, we propose a novel defense method, Aligned Dual Gradient Pruning (ADGP), based on gradient sparsification, which can improve communication efficiency while preserving the utility and privacy of federated training. Specifically, ADGP slightly modifies gradient sparsification to obtain a stronger privacy guarantee. Through its strategies for selecting primary gradient parameters during training, ADGP can also significantly improve communication efficiency, supported by a theoretical analysis of its convergence and generalization. Our extensive experiments show that ADGP can effectively defend against the most powerful GIAs and significantly reduce the communication overhead without sacrificing the model's utility.
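The abstract builds on gradient sparsification, i.e., transmitting only a small fraction of gradient entries each round. The sketch below shows plain top-k magnitude pruning, a minimal illustration of that general mechanism; it is not the paper's ADGP method, whose dual pruning and parameter selection strategies are more involved, and the function name and `keep_ratio` parameter are our own illustrative choices.

```python
import numpy as np

def prune_gradient_topk(grad, keep_ratio=0.1):
    """Keep only the top-k entries of `grad` by magnitude, zeroing the rest.

    Plain top-k gradient sparsification for illustration only; this is
    NOT the paper's ADGP method, which uses a more elaborate selection
    strategy to strengthen the privacy guarantee.
    """
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude components.
    topk_idx = np.argpartition(np.abs(flat), -k)[-k:]
    pruned = np.zeros_like(flat)
    pruned[topk_idx] = flat[topk_idx]
    return pruned.reshape(grad.shape)
```

Because only k entries (plus their indices) need to be communicated, the per-round upload cost shrinks roughly in proportion to `keep_ratio`, while the zeroed coordinates are withheld from any attacker observing the shared update.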
Supplementary Material: zip