Federated Learning with Heterogeneous Differential Privacy

TMLR Paper312 Authors

28 Jul 2022 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Federated learning (FL) takes a first step towards preserving privacy by training statistical models while keeping client data local. Models trained using FL may still indirectly leak private client information through model updates during training. Differential privacy (DP) can be employed on model updates to provide privacy guarantees within FL, typically at the cost of degraded accuracy of the final trained model. Both non-private FL and DP-FL can be solved using variants of the federated averaging (\textsc{FedAvg}) algorithm. In this work, we consider a heterogeneous DP setup where clients may require varying degrees of privacy guarantees. First, we analyze the optimal solution to a simplified linear problem with (heterogeneous) DP in a Bayesian setup. We find that, unlike the non-private setup, where the optimal solution for homogeneous data amounts to a single global solution for all clients learned through \textsc{FedAvg}, the optimal solution for each client in this setup is a personalized one even when data is homogeneous. We also analyze the privacy-utility tradeoff for this problem, characterizing the gains obtained from heterogeneous privacy when some clients opt for less stringent privacy guarantees. We propose a new algorithm for federated learning with heterogeneous DP, referred to as \textsc{FedHDP}, which employs personalization and weighted averaging at the server based on the privacy choices of clients, to achieve the Bayes optimal solution on a class of linear problems for all clients. Through numerical experiments, we show that \textsc{FedHDP} provides up to $9.27\%$ performance gain compared to the baseline DP-FL on the considered datasets when $5\%$ of clients opt out of DP. Additionally, we show a gap in the average performance of local models between non-private and private clients of up to $3.49\%$, empirically illustrating that the baseline DP-FL might incur a large utility cost when not all clients require the stricter privacy guarantees.
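As a rough illustration of the server-side idea described in the abstract (private clients sanitize their updates with DP noise, and the server combines the private and opt-out groups with different weights), the following is a minimal sketch. The function names, group weights, clip norm, and noise scale are hypothetical placeholders, not the paper's exact \textsc{FedHDP} procedure or hyperparameters.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Gaussian mechanism on a client update: clip its L2 norm, then add noise.
    (Illustrative clip_norm/noise_std; real values depend on the privacy budget.)"""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def server_aggregate(updates, is_private, w_private=0.3, w_nonprivate=0.7):
    """Weighted averaging across privacy groups: each group is averaged
    internally, then the group means are combined with assumed weights."""
    updates = np.stack(updates)
    is_private = np.asarray(is_private)
    priv, nonpriv = updates[is_private], updates[~is_private]
    agg = np.zeros_like(updates[0])
    total_w = 0.0
    if len(priv):
        agg += w_private * priv.mean(axis=0)
        total_w += w_private
    if len(nonpriv):
        agg += w_nonprivate * nonpriv.mean(axis=0)
        total_w += w_nonprivate
    return agg / total_w

# Usage: private clients sanitize before sending; one client opts out of DP.
rng = np.random.default_rng(0)
raw_updates = [rng.normal(size=5) for _ in range(4)]
is_private = [True, True, True, False]
sent = [dp_sanitize(u, rng=rng) if p else u for u, p in zip(raw_updates, is_private)]
global_update = server_aggregate(sent, is_private)
print(global_update)
```

Giving the opt-out group a larger weight mirrors the intuition that non-private (noise-free) updates are more informative; the paper additionally personalizes each client's final model, which this sketch omits.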
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Revised according to the feedback from reviewers. Please see the comments that summarize the revisions.
Assigned Action Editor: ~Kangwook_Lee1
Submission Number: 312