Keywords: Differential Privacy, Federated Learning, Second-Order Methods, Communication Efficiency
TL;DR: We propose a DP FL method that uses second-order information and has a per-iteration communication cost similar to that of first-order methods.
Abstract: Training machine learning models with differential privacy (DP) is commonly done using first-order methods such as DP-SGD. In the non-private setting, second-order methods are used to mitigate the slow convergence of first-order methods. DP methods that use second-order information still provide faster convergence; however, existing methods cannot easily be turned into federated learning (FL) algorithms without the excessive communication cost required to exchange Hessian or feature-covariance information between the nodes and the server. In this paper we propose DP-FedNew, a DP method for FL that uses second-order information and has a per-iteration communication cost similar to that of first-order methods such as DP Federated Averaging. Experiments on last-layer fine-tuning of deep convolutional networks demonstrate that our proposed algorithm is highly competitive with first- and second-order baselines under both record- and user-level DP across a range of privacy budgets. We also consider a variant that avoids excessive memory and compute requirements on the edge devices, and we provide a theoretical analysis of the method that illustrates its stability.
Submission Number: 32
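For context on the communication pattern referenced in the abstract, below is a minimal sketch of one round of user-level DP Federated Averaging, the first-order baseline named above. It assumes Gaussian noising of clipped, model-sized client updates at the server; all function names and the parameters C and sigma are illustrative placeholders, not taken from the paper. It is not the DP-FedNew update itself, only an indication of the per-round message size (one model-sized vector per client) that DP-FedNew aims to match while using second-order information.

```python
# Illustrative sketch (not the paper's method): one round of user-level
# DP Federated Averaging. Each client sends only a model-sized update,
# which is the per-round communication cost DP-FedNew is said to match.
import numpy as np

def clip(update, C):
    """Scale a client update so its L2 norm is at most C."""
    norm = np.linalg.norm(update)
    return update * min(1.0, C / max(norm, 1e-12))

def dp_fedavg_round(global_model, client_updates, C=1.0, sigma=1.0, rng=None):
    """Average clipped client updates and add Gaussian noise at the server."""
    rng = rng or np.random.default_rng()
    clipped = [clip(u, C) for u in client_updates]   # bound each user's contribution
    avg = np.mean(clipped, axis=0)                   # one model-sized vector, as in FedAvg
    noise = rng.normal(0.0, sigma * C / len(clipped), size=avg.shape)
    return global_model + avg + noise                # noisy first-order server update
```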