DP-LSSGD: An Optimization Method to Lift the Utility in Privacy-Preserving ERM

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
TL;DR: We propose a differentially private Laplacian smoothing stochastic gradient descent that trains machine learning models with better utility while maintaining differential privacy guarantees.
Abstract: Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than their non-private counterparts. To mitigate this degradation, we propose DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is Laplacian smoothing, which smooths out the Gaussian noise injected by the Gaussian mechanism. With the same amount of noise as the Gaussian mechanism, DP-LSSGD attains the same DP guarantee but better utility, especially in regimes with strong DP guarantees. In practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement, and its extra computational complexity and memory overhead compared with DP-SGD are negligible. DP-LSSGD is applicable to training a wide variety of ML models, including deep neural networks (DNNs).
Keywords: Privacy-preserving ERM, Laplacian Smoothing, Improve Utility
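
As a rough illustration of the mechanism described in the abstract, the sketch below shows one DP-LSSGD-style update in NumPy: per-example gradient clipping, Gaussian noise injection, and a Laplacian smoothing solve (I - σL)^{-1} applied to the noisy gradient via FFT (the smoothing operator is circulant). The function names, the grad_fn interface, and all hyperparameter values are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def laplacian_smooth(v, sigma=1.0):
    """Apply (I - sigma * L)^{-1} to a flattened gradient v, where L is the
    1-D discrete Laplacian with periodic boundary conditions.
    The operator is circulant, so the solve costs one FFT/IFFT pair."""
    d = v.shape[0]
    # First column of the circulant matrix I - sigma * L:
    # [1 + 2*sigma, -sigma, 0, ..., 0, -sigma]
    c = np.zeros(d)
    c[0] = 1.0 + 2.0 * sigma
    if d > 1:
        c[1] = -sigma
        c[-1] = -sigma
    return np.real(np.fft.ifft(np.fft.fft(v) / np.fft.fft(c)))

def dp_lssgd_step(w, grad_fn, batch, lr=0.1, clip=1.0, noise_mult=1.0,
                  sigma=1.0, rng=None):
    """One DP-LSSGD-style step (illustrative): clip per-example gradients,
    add Gaussian noise, then precondition with Laplacian smoothing."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Per-example gradients; grad_fn(w, x) is assumed to return a vector
    # with the same shape as w.
    grads = np.stack([grad_fn(w, x) for x in batch])
    # Clip each per-example gradient to L2 norm at most `clip`, then average.
    scale = np.maximum(np.linalg.norm(grads, axis=1) / clip, 1.0)
    g = (grads / scale[:, None]).mean(axis=0)
    # Gaussian mechanism: noise scaled to the clipping bound and batch size.
    g += rng.normal(0.0, noise_mult * clip / len(batch), size=g.shape)
    # Laplacian smoothing of the noisy gradient, then the usual SGD update.
    return w - lr * laplacian_smooth(g, sigma)
```

Because the smoothing matrix depends only on σ and the dimension, its FFT can be precomputed once per model shape; setting sigma=0 recovers plain DP-SGD.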