CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness

Published: 09 Dec 2024 · Last Modified: 27 Sept 2024 · AAAI 2025 (Under review) · CC BY 4.0
Abstract: Federated learning (FL) has emerged as a promising framework for distributed machine learning. It enables collaborative learning among multiple clients, utilizing distributed data and computing resources. However, FL faces challenges in balancing privacy guarantees, communication efficiency, and overall model accuracy. In this work, we introduce CorBin-FL, a privacy mechanism that uses correlated binary stochastic quantization to achieve differential privacy while maintaining overall model accuracy. The approach uses secure multi-party computation techniques to enable clients to perform correlated quantization of their local model updates without compromising individual privacy. We provide theoretical analysis showing that CorBin-FL achieves parameter-level local differential privacy (PLDP), and that it asymptotically optimizes the privacy-utility trade-off between the mean square error utility measure and the PLDP privacy measure. We further propose AugCorBin-FL, an extension that, in addition to PLDP, achieves user-level and sample-level central differential privacy guarantees. For both mechanisms, we derive bounds on privacy parameters and mean squared error performance measures. Extensive experiments on MNIST and CIFAR10 datasets demonstrate that our mechanisms outperform existing differentially private FL mechanisms, including Gaussian and Laplacian mechanisms, in terms of model accuracy under equal PLDP privacy budgets.
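To make the core idea concrete, the sketch below shows a standard one-bit locally differentially private quantizer combined with antithetic common randomness (one client in a pair uses a shared uniform draw `u`, the other `1 - u`). This is an illustrative assumption, not the paper's exact construction: CorBin-FL realizes the correlation via secure multi-party computation, and its pairing scheme may differ. All names (`one_bit_ldp_quantize`, `unbias`) and parameter choices here are hypothetical.

```python
import numpy as np

def one_bit_ldp_quantize(x, c, eps, u):
    # Map x in [-c, c] to {+c, -c} so the output satisfies eps-LDP at the
    # parameter level: P(+c | x) / P(+c | x') <= e^eps for all x, x' in [-c, c].
    s = (np.exp(eps) - 1) / (np.exp(eps) + 1)
    p = 0.5 + (x / (2 * c)) * s          # probability of outputting +c
    return np.where(u < p, c, -c)

def unbias(y, eps):
    # Rescale the binary output so its expectation equals x.
    return y * (np.exp(eps) + 1) / (np.exp(eps) - 1)

rng = np.random.default_rng(0)
x, c, eps = 0.3, 1.0, 2.0
u = rng.random(50_000)                   # shared (common) randomness

# Hypothetical antithetic pairing: client A uses u, client B uses 1 - u,
# which makes the two clients' quantization errors negatively correlated.
ya = unbias(one_bit_ldp_quantize(x, c, eps, u), eps)
yb = unbias(one_bit_ldp_quantize(x, c, eps, 1 - u), eps)
pair_avg = (ya + yb) / 2

print(abs(ya.mean() - x))        # near 0: each client's estimate is unbiased
print(pair_avg.var(), ya.var())  # paired average has much lower variance
```

The point of the correlation is visible in the last line: averaging two independently randomized clients would halve the variance, while the negatively correlated pair reduces it further, which is the kind of privacy-utility gain the abstract attributes to correlated quantization.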