Improving Differentially-Private Deep Learning with Gradient Index Pruning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Differential Privacy, Stochastic Gradient Descent
Abstract: Differential Privacy (DP) provides a formal guarantee that prevents adversaries from inferring any individual record from a population. Differentially Private Stochastic Gradient Descent (DPSGD), the most widely used method for training models under DP, adds randomized noise to the gradients at each iteration, but this causes a significant accuracy decline, particularly on large, deep models. To confront the curse of dimensionality in differentially-private deep learning, we propose a Gradient Index Pruning (GIP) mechanism that prunes gradients via a novel index perturbation scheme, preserving the important components of the gradients while reducing their dimensionality. Our mechanism does not alter the model; it merely adds a noisy top-$k$ pruning step before the conventional gradient noise injection in DPSGD. We prove that GIP satisfies DP yet improves accuracy over DPSGD, and we present a theoretical analysis showing that GIP introduces less perturbation into training. Experiments on a variety of models and datasets demonstrate that GIP outperforms state-of-the-art differentially-private deep learning methods by a $1$-$3\%$ accuracy margin.
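The pipeline the abstract describes (clip, noisy top-$k$ pruning, then conventional DPSGD noising) can be sketched in a few lines of NumPy. The sketch below is an illustration of that structure, not the paper's implementation: the function name `gip_dpsgd_step`, the Gaussian perturbation of coordinate magnitudes used to privatize the index selection, and all hyperparameters are assumptions standing in for the paper's actual index perturbation scheme.

```python
import numpy as np

def gip_dpsgd_step(params, per_example_grads, k, clip_norm,
                   sigma_index, sigma_grad, lr, rng):
    """One DPSGD step with a noisy top-k pruning stage (illustrative sketch)."""
    # 1. Clip each per-example gradient to bound its L2 sensitivity at clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    g_sum = np.sum(clipped, axis=0)

    # 2. Noisy top-k index selection: perturb coordinate magnitudes before
    #    ranking so the released index set is itself privatized.
    #    (Gaussian score noise here is a stand-in for the paper's scheme.)
    noisy_scores = np.abs(g_sum) + rng.normal(0.0, sigma_index * clip_norm,
                                              size=g_sum.shape)
    top_k = np.argpartition(noisy_scores, -k)[-k:]

    # 3. Conventional DPSGD noising, applied only to the k surviving
    #    coordinates; all other coordinates are pruned to zero.
    pruned = np.zeros_like(g_sum)
    pruned[top_k] = g_sum[top_k] + rng.normal(0.0, sigma_grad * clip_norm, size=k)

    # 4. Average over the batch and take the gradient step.
    return params - lr * pruned / len(per_example_grads)

# Example: a 1000-dimensional model, batch of 32, keeping the top 50 coordinates.
rng = np.random.default_rng(0)
params = rng.normal(size=1000)
grads = [rng.normal(size=1000) for _ in range(32)]
params = gip_dpsgd_step(params, grads, k=50, clip_norm=1.0,
                        sigma_index=0.5, sigma_grad=1.0, lr=0.1, rng=rng)
```

Note that noise is added only to the $k$ retained coordinates rather than to the full gradient vector, which is how a pruning step of this kind can reduce the total perturbation injected per iteration.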
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
Supplementary Material: zip
14 Replies
