DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing
Keywords: Federated Learning, Differential Privacy, Matrix Mechanism, Packed Secret Sharing
TL;DR: We achieve better privacy-utility tradeoff for Federated Learning with local differential privacy using the matrix mechanism and cryptographic techniques.
Abstract: Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
Differential Privacy (DP) has become the main measure of privacy in the FL setting.
DP comes in two flavors: central and local.
In the former, a centralized server is trusted to receive the users' raw gradients from a training step, and then perturb their aggregation with some noise before releasing the next version of the model.
In the latter (more private) setting, noise is applied on users' local devices, and only the aggregation of users' noisy gradients is revealed even to the server.
Great strides have been made in improving the privacy-utility trade-off in the central DP setting, by utilizing the so-called \emph{matrix mechanism}.
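To make the idea concrete, here is a minimal sketch of the matrix mechanism (not the paper's construction): the workload of prefix sums over training rounds, $A$, is factored as $A = BC$; noise is added in the $C$-space and recombined by $B$, so each released prefix accumulates only $O(\log T)$ noise terms instead of $O(T)$. The binary-tree factorization below for $T=4$ is a standard textbook instance; all names and noise scales are illustrative.

```python
import numpy as np

# Workload: prefix sums of per-round values over T = 4 rounds,
# i.e. A is the lower-triangular all-ones matrix.
T = 4
A = np.tril(np.ones((T, T)))

# Binary-tree factorization A = B @ C. C maps the T round values to
# 7 tree-node partial sums; B recombines a few nodes per prefix.
C = np.array([
    [1, 0, 0, 0],   # x1
    [0, 1, 0, 0],   # x2
    [0, 0, 1, 0],   # x3
    [0, 0, 0, 1],   # x4
    [1, 1, 0, 0],   # x1 + x2
    [0, 0, 1, 1],   # x3 + x4
    [1, 1, 1, 1],   # x1 + x2 + x3 + x4
], dtype=float)
B = np.array([
    [1, 0, 0, 0, 0, 0, 0],  # prefix 1 = x1
    [0, 0, 0, 0, 1, 0, 0],  # prefix 2 = (x1+x2)
    [0, 0, 1, 0, 1, 0, 0],  # prefix 3 = (x1+x2) + x3
    [0, 0, 0, 0, 0, 0, 1],  # prefix 4 = full sum
], dtype=float)
assert np.allclose(B @ C, A)

# Matrix mechanism: perturb the C-space values with noise z and release
# B @ (C @ x + z). The estimate is unbiased, and each prefix sum sees
# only the noise of the tree nodes it touches.
rng = np.random.default_rng(0)
x = rng.normal(size=T)             # stand-in for per-round aggregates
z = rng.normal(scale=0.1, size=7)  # Gaussian DP noise (scale illustrative)
noisy_prefix_sums = B @ (C @ x + z)
```

With `z = 0` the release reduces exactly to the true prefix sums `A @ x`, which is the unbiasedness property the factorization guarantees.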
However, progress has largely stalled in the local DP setting.
In this work, we introduce the \emph{distributed} matrix mechanism to achieve the best of both worlds: local DP together with the improved privacy-utility trade-off of the matrix mechanism.
We accomplish this by proposing a cryptographic protocol that securely transfers sensitive values across rounds, which makes use of \emph{packed secret sharing}.
This protocol accommodates the dynamic participation of users per training round required by FL, including those that may drop out from the computation.
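For intuition, the following is a toy sketch of packed secret sharing in the abstract's sense, not the paper's protocol: a single degree-$(t+k-1)$ polynomial hides $k$ secrets at fixed evaluation points, any $t+k$ shares reconstruct all of them, any $t$ shares reveal nothing, and shares are additively homomorphic, which is what lets noisy values be aggregated and carried across rounds without being revealed. All function names and parameters here are illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def _lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through points [(xi, yi)], mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def share(secrets, n, t):
    """Pack k = len(secrets) values into one degree-(t+k-1) polynomial.
    Secrets sit at x = -1..-k (mod P); t random anchors add privacy;
    shares are evaluations at x = 1..n. Any t+k shares reconstruct."""
    k = len(secrets)
    anchors = [((-(i + 1)) % P, s % P) for i, s in enumerate(secrets)]
    anchors += [(n + 1 + j, random.randrange(P)) for j in range(t)]
    return [(x, _lagrange_eval(anchors, x)) for x in range(1, n + 1)]

def reconstruct(shares, k):
    """Recover the k packed secrets from at least t+k shares."""
    return [_lagrange_eval(shares, (-(i + 1)) % P) for i in range(k)]
```

Because shares of two packed vectors (over the same points) add componentwise to shares of the elementwise sum, servers can aggregate users' noisy contributions share-by-share and only ever open the sum.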
We provide experiments showing that our mechanism significantly improves the privacy-utility trade-off of FL models compared to previous local DP mechanisms, with little added overhead.
Submission Number: 17