Keywords: DP-SGD, Differential Privacy, Shuffling, Poisson Subsampling
TL;DR: Establishes new lower bounds on the privacy analysis of DP-SGD with shuffling and provides the first comparative study of DP-SGD with shuffling vs. Poisson subsampling in light of the gaps in privacy analysis between the two approaches.
Abstract: We provide new lower bounds on the privacy guarantee of the _multi-epoch_ Adaptive Batch Linear Queries (ABLQ) mechanism with _shuffled batch sampling_, demonstrating substantial gaps when compared to _Poisson subsampling_; prior analysis was limited to a single epoch.
Since the privacy analysis of Differentially Private Stochastic Gradient Descent (DP-SGD) is obtained by analyzing the ABLQ mechanism, this calls into serious question the common practice of implementing shuffling-based DP-SGD while reporting privacy parameters as if Poisson subsampling were used.
To understand the impact of this gap on the utility of trained machine learning models, we introduce a novel practical approach for implementing Poisson subsampling _at scale_ using massively parallel computation, and use it to train models efficiently.
We compare the utility of models trained with Poisson-subsampling-based DP-SGD against the optimistic estimates of utility under shuffling obtained via our new lower bounds on the privacy guarantee of ABLQ with shuffling.
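For intuition only, and not the paper's large-scale implementation, the following minimal NumPy sketch contrasts the two batch samplers discussed in the abstract: Poisson subsampling, where each example independently joins each batch with probability q, versus shuffling, where each epoch permutes the data once and partitions it into fixed-size batches. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def poisson_batches(num_examples, expected_batch_size, num_steps, rng):
    """Poisson subsampling: each example is included in each batch
    independently with probability q = expected_batch_size / num_examples,
    so batch sizes are random (Binomial(num_examples, q))."""
    q = expected_batch_size / num_examples
    for _ in range(num_steps):
        mask = rng.random(num_examples) < q
        yield np.flatnonzero(mask)

def shuffled_batches(num_examples, batch_size, num_epochs, rng):
    """Shuffled batch sampling: each epoch draws one random permutation
    and partitions it into fixed-size batches, so every example appears
    exactly once per epoch."""
    for _ in range(num_epochs):
        perm = rng.permutation(num_examples)
        for start in range(0, num_examples, batch_size):
            yield perm[start:start + batch_size]

# Illustrative usage: 1,000 examples, (expected) batch size 100.
rng = np.random.default_rng(0)
poisson_sizes = [len(b) for b in poisson_batches(1_000, 100, 10, rng)]
shuffle_sizes = [len(b) for b in shuffled_batches(1_000, 100, 1, rng)]
print(poisson_sizes)  # sizes fluctuate around 100
print(shuffle_sizes)  # exactly 100 each
```

The distinction matters because the standard privacy accounting for DP-SGD assumes the first sampler, while practical training pipelines typically use the second.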
Primary Area: Privacy
Submission Number: 19922