Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance

TMLR Paper 2175 Authors

10 Feb 2024 (modified: 02 Jul 2024) · Rejected by TMLR
Abstract: Differentially private stochastic gradient descent privatizes model training by injecting noise into each iteration, where the noise magnitude increases with the number of model parameters. Recent works suggest that this noise can be reduced by leveraging public data for private machine learning: gradients are projected onto a low-dimensional subspace prescribed by the public data. However, given a choice of public datasets, it is unclear why certain datasets perform better than others for a particular private task, or how to identify the best one. We provide a simple metric that measures a low-dimensional subspace distance between gradients of the public and private examples. We empirically demonstrate that it is well correlated with the resulting model utility when using the public and private dataset pair (i.e., trained model accuracy is monotone in the distance), and thus can be used to select an appropriate public dataset. We provide theoretical analysis demonstrating that the excess risk scales with this subspace distance. This distance is easy to compute and robust to modifications in the setting.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ellen_Vitercik1
Submission Number: 2175