Agent-Level Differentially Private Federated Learning via Compressed Model Perturbation

Published: 01 Jan 2022, Last Modified: 29 Sept 2023. CNS 2022.
Abstract: Federated learning (FL) involves a network of distributed agents that collaboratively learn a common model without sharing their raw data. Privacy and communication are two critical concerns in FL, but they are often treated separately in the literature. While random noise can be added during the FL process to defend against privacy inference attacks, its magnitude is linearly proportional to the model size, which can be very large for modern deep neural networks and can lead to severe degradation in model accuracy. On the other hand, various compression techniques have been proposed to improve the communication efficiency of FL, but their interplay with privacy protection is largely ignored. Motivated by the observation that privacy protection and communication reduction are closely related in the context of FL, we propose a new federated learning scheme called CMP-Fed that achieves agent-level differential privacy with high model accuracy by leveraging communication compression techniques, even for large model sizes. The key component of CMP-Fed is compressed model perturbation (CMP), which first compresses the shared model updates and then perturbs them with random noise at each communication round of federated learning. Experimental results on the Fashion-MNIST dataset show that CMP-Fed substantially outperforms existing differentially private federated learning schemes in terms of model accuracy under the same privacy guarantee, while still enjoying the communication benefit of model compression.
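The core idea of CMP, compress first and perturb only the retained coordinates, can be illustrated with a minimal sketch. The abstract does not specify which compressor the paper uses; the snippet below assumes top-k sparsification and Gaussian noise for illustration, and the function name `cmp_update` and the parameters `k` and `sigma` are hypothetical:

```python
import numpy as np

def cmp_update(update, k, sigma, seed=None):
    """Illustrative sketch of compressed model perturbation (CMP):
    keep only the k largest-magnitude entries of a model update,
    then add Gaussian noise to just those retained coordinates,
    so the total noise scales with k rather than the full model size.
    (Assumed compressor: top-k; not necessarily the paper's choice.)"""
    rng = np.random.default_rng(seed)
    flat = update.ravel()
    # Indices of the k largest-magnitude coordinates.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    compressed = np.zeros_like(flat)
    compressed[idx] = flat[idx]
    # Perturb only the k transmitted values.
    compressed[idx] += rng.normal(0.0, sigma, size=k)
    return compressed.reshape(update.shape), idx

# Example: a 10,000-parameter update compressed to 100 coordinates.
update = np.random.default_rng(0).normal(size=10_000)
noisy, idx = cmp_update(update, k=100, sigma=0.1, seed=1)
```

Because noise is added only to the k retained coordinates, the noisy update stays k-sparse and can still be transmitted compactly, which is the communication benefit the abstract refers to. A full DP guarantee would additionally require clipping the update norm and calibrating `sigma` to the sensitivity and privacy budget, which this sketch omits.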