Correlated Noise Provably Beats Independent Noise for Differentially Private Learning

Published: 28 Oct 2023, Last Modified: 11 Dec 2023, FL@FM-NeurIPS’23 Poster
Student Author Indication: No
Keywords: differential privacy, correlated noise mechanisms, linear regression, asymptotics
TL;DR: We prove the benefits of correlated noise for DP optimization in linear regression. Using this theory, we derive a correlated noise generation algorithm that is orders of magnitude more efficient and nearly matches SOTA for private deep learning.
Abstract: Differentially private learning algorithms inject noise into the learning process; the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration. Motivated by practical considerations in federated learning, recent work on matrix factorization mechanisms has shown empirically that introducing correlations in the noise can greatly improve utility. We characterize the asymptotic objective suboptimality for any choice of the correlation function, giving precise analytical bounds for linear regression. Using these bounds, we show how correlated noise provably improves upon vanilla DP-SGD as a function of problem parameters such as the effective dimension and condition number. Moreover, our analytical expression for the near-optimal correlation function circumvents the cubic complexity of the semi-definite program used to optimize the noise correlation in prior work. We validate these theoretical results with experiments on private deep learning in both centralized and federated settings. Our method matches or outperforms prior work while being efficient in terms of both computation and memory.
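For intuition, here is a minimal NumPy sketch contrasting the two noise-injection schemes the abstract describes: independent Gaussian noise per step (DP-SGD) versus noise that linearly mixes past i.i.d. draws through a lower-triangular Toeplitz matrix. The coefficients `beta` below (mild anti-correlation with geometric decay) are hypothetical illustrative values, not the paper's near-optimal correlation function, which is derived analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, sigma = 100, 10, 1.0  # iterations, model dimension, noise scale

# DP-SGD baseline: fresh independent Gaussian noise at every iteration.
z = rng.normal(scale=sigma, size=(T, d))
independent_noise = z

# Correlated noise: the noise injected at step t mixes past i.i.d. draws,
#   n_t = sum_{k=0}^{t} beta_k z_{t-k},
# i.e. n = B z for a lower-triangular Toeplitz matrix B built from
# coefficients (beta_0, beta_1, ...). These beta values are hypothetical
# stand-ins, not the paper's near-optimal choice.
nu = 0.5
beta = np.concatenate(([1.0], -0.25 * nu ** np.arange(T - 1)))
lag = np.subtract.outer(np.arange(T), np.arange(T))  # lag[t, s] = t - s
B = np.where(lag >= 0, beta[np.clip(lag, 0, T - 1)], 0.0)
correlated_noise = B @ z

# A noisy gradient step then uses g_t + correlated_noise[t]
# in place of g_t + independent_noise[t].
```

Materializing the full T-by-T matrix B is only for clarity; a practical implementation would apply the Toeplitz mixing as a streaming convolution over recent draws.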
Submission Number: 15