On the Convergence and Calibration of Deep Learning with Differential Privacy

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Keywords: Deep Learning, Differential Privacy, Optimization Algorithms, Convergence Theory, Calibration
Abstract: In deep learning with differential privacy (DP), the neural network usually achieves privacy at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of DP deep learning, through the lens of training dynamics and the neural tangent kernel (NTK) matrix. Our convergence theory successfully characterizes the effects of two key components of DP training: per-sample clipping and noise addition. We establish a general principled framework for understanding DP deep learning with any network architecture, any loss function, and various optimizers, including DP-Adam. Our analysis also motivates a new clipping method, 'global clipping', which significantly improves convergence while preserving the same DP guarantee and computational efficiency as the existing method, which we term 'local clipping'. In addition, global clipping is surprisingly effective at learning calibrated classifiers, in contrast to existing DP classifiers, which are oftentimes over-confident and unreliable. Implementation-wise, the new clipping can be realized by inserting one line of code into the PyTorch Opacus library.
One-sentence Summary: We initiate the convergence analysis of private deep learning and propose a new clipping method for private optimizers that significantly and provably improves convergence and calibration.
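The abstract contrasts standard per-sample ('local') clipping with the proposed 'global' clipping but does not spell out the update rule. The minimal NumPy sketch below illustrates one plausible reading, as an assumption for illustration only: local clipping rescales each per-sample gradient to L2 norm at most R (as in standard DP-SGD), while the global variant instead zeroes out gradients whose norm exceeds R, so the surviving gradients keep their directions and relative magnitudes. The function names and the exact global-clipping rule are hypothetical; consult the paper for the actual method.

```python
import numpy as np

def local_clip(per_sample_grads, R):
    # Standard per-sample ("local") clipping as in DP-SGD: each row
    # (one sample's gradient) is rescaled so its L2 norm is at most R.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, R / norms)
    return per_sample_grads * factors

def global_clip(per_sample_grads, R):
    # Hypothetical sketch of "global" clipping: gradients whose norm
    # exceeds R are dropped (zeroed) rather than rescaled, so the kept
    # gradients retain direction and relative magnitude. This is an
    # illustrative assumption, not the paper's exact rule.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    mask = (norms <= R).astype(per_sample_grads.dtype)
    return per_sample_grads * mask

def dp_step(per_sample_grads, R, noise_multiplier, clip_fn, rng):
    # Generic DP gradient step: clip per sample, sum, add Gaussian noise
    # calibrated to the sensitivity R, then average over the batch.
    clipped = clip_fn(per_sample_grads, R)
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * R, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)
```

Both clipping rules bound each sample's contribution by R, so they admit the same Gaussian-mechanism privacy accounting; only the bias they introduce into the averaged gradient differs.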
Supplementary Material: zip