Privacy-preserving Learning via Deep Net Pruning

28 Sept 2020 (modified: 06 Nov 2024) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: Neural Network Pruning, Differential Privacy
Abstract: Neural network pruning has proven successful at substantially improving the computational efficiency of deep models while incurring only a small loss in final accuracy. In this paper, we explore an additional benefit of neural network pruning: enhanced privacy. Specifically, we show a novel connection between magnitude-based pruning and adding differentially private noise to intermediate layers in the over-parameterized regime. To the best of our knowledge, this is the first work that bridges pruning with the theory of differential privacy. The paper also presents experimental results from running a model inversion attack on two benchmark datasets, which support the theoretical finding.
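To make the two mechanisms the abstract relates concrete, here is a minimal NumPy sketch of magnitude-based pruning alongside additive Gaussian noise (the standard mechanism behind (ε, δ)-differential privacy). The function names, the `sparsity` and `sigma` parameters, and the toy activation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude `sparsity` fraction of entries,
    # keeping only the largest-magnitude weights/activations.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

def gaussian_mechanism(x, sigma=0.1, rng=None):
    # Additive Gaussian noise: the mechanism typically used to obtain
    # (epsilon, delta)-differential privacy for a bounded-sensitivity output.
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(scale=sigma, size=x.shape)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))              # a toy intermediate-layer activation
pruned = magnitude_prune(h, sparsity=0.5)
noised = gaussian_mechanism(h, sigma=0.1, rng=rng)
print(np.mean(pruned == 0))              # roughly half the entries are zeroed
```

The paper's claim, informally, is that under over-parameterization the output distortion induced by the first operation behaves like the calibrated noise added by the second.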
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: This paper shows a novel connection between magnitude-based pruning and adding differentially private noise to intermediate layers, under the over-parameterized regime
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=H5iOhrJL5M