Keywords: privacy-preserving machine learning, DP-SGD, public data in private learning
TL;DR: Leveraging public data more effectively in DP-SGD to significantly improve accuracy.
Abstract: A key challenge in differentially private machine learning is balancing the trade-off between privacy and utility.
A recent line of work has demonstrated that leveraging \emph{public data samples} can enhance the utility of DP-trained models (for the same privacy guarantees).
In this work, we show that public data can be used to improve utility in DP models significantly more than shown in recent works.
Towards this end, we introduce a modified DP-SGD algorithm that leverages public data during its training process.
Our technique uses public data in two complementary ways: (1) it uses generative models trained on public data to produce synthetic data that is embedded into multiple steps of the training pipeline; (2) it uses a new gradient clipping mechanism (required for achieving differential privacy) that shifts the \emph{origin} of gradient vectors using information inferred from the available public and synthesized data.
Our experimental results demonstrate the effectiveness of our approach in improving the state-of-the-art in differentially private machine learning across multiple datasets, network architectures, and application domains.
Notably, we achieve $75\%$ accuracy on CIFAR10 when using only $2,000$ public images; this is \emph{significantly higher} than the state-of-the-art of $68\%$ for DP-SGD with privacy budget $\varepsilon=2, \delta=10^{-5}$ (given the same number of public data points).
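The origin-shifting clipping idea described above can be illustrated with a minimal sketch. The function names and the choice of a public-data gradient as the shifted origin are illustrative assumptions, not the authors' exact implementation: each per-example gradient is clipped to norm $C$ around an origin estimated from public/synthetic data rather than around zero, then shifted back before aggregation and noising.

```python
import numpy as np

def clip_around_origin(grad, origin, clip_norm):
    """Clip a per-example gradient to L2 norm `clip_norm` measured from a
    shifted origin (e.g., a gradient estimated on public/synthetic data),
    then translate back. With origin = 0 this reduces to standard DP-SGD
    clipping. Names here are illustrative, not the paper's API."""
    diff = grad - origin
    norm = np.linalg.norm(diff)
    scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if outside the ball
    return origin + diff * scale

# Toy usage: a gradient far from the public-data origin gets pulled
# onto the clipping ball centered at that origin.
origin = np.array([1.0, 1.0])
grad = np.array([4.0, 5.0])
clipped = clip_around_origin(grad, origin, clip_norm=1.0)
```

Because every clipped gradient lies in a ball of radius $C$ around a data-independent origin, the per-example sensitivity matches standard DP-SGD, so the usual Gaussian-noise calibration still applies.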
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)