Adaptive Hyperparameter Selection for Differentially Private Gradient Descent

Published: 03 Oct 2023, Last Modified: 03 Oct 2023. Accepted by TMLR.
Abstract: We present an adaptive mechanism for hyperparameter selection in differentially private optimization that addresses the inherent trade-off between utility and privacy. The mechanism eliminates the often unstructured and time-consuming manual effort of selecting hyperparameters and avoids the additional privacy costs that hyperparameter selection would otherwise incur on top of those of the actual algorithm. We instantiate our mechanism for noisy gradient descent on non-convex, convex and strongly convex loss functions, respectively, to derive schedules for the noise variance and step size. These schedules account for the properties of the loss function and adapt to convergence metrics such as the gradient norm. When using these schedules, we show that noisy gradient descent converges at essentially the same rate as its noise-free counterpart. Numerical experiments show that the schedules consistently perform well across a range of datasets without manual tuning.
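The abstract describes schedules for the noise variance and step size of noisy gradient descent that adapt to convergence metrics such as the gradient norm. As a rough illustration of where such schedules enter the update rule, here is a minimal Python sketch; the decaying step-size rule and the gradient-norm-tied noise scale are placeholder assumptions for illustration only, not the schedules or the privacy accounting derived in the paper.

```python
import numpy as np


def noisy_gradient_descent(grad_fn, theta0, steps, clip=1.0, sigma0=1.0,
                           eta0=0.1, rng=None):
    """Minimal sketch of noisy gradient descent with adaptive schedules.

    The step-size and noise-scale rules below are illustrative placeholders,
    not the schedules derived in the paper, and no privacy analysis is done.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for t in range(steps):
        g = np.asarray(grad_fn(theta), dtype=float)
        # Clip the gradient to bound its sensitivity, as is standard in
        # differentially private optimization.
        g = g / max(1.0, np.linalg.norm(g) / clip)
        # Placeholder adaptive rules: decay the step size over iterations and
        # tie the noise scale to the clipped gradient norm. The paper instead
        # derives schedules from the loss class (non-convex, convex, strongly
        # convex), which this sketch does not reproduce.
        eta = eta0 / (1.0 + t)
        sigma = sigma0 * max(np.linalg.norm(g), 1e-3)
        noise = rng.normal(0.0, sigma, size=theta.shape)
        theta = theta - eta * (g + noise)
    return theta


# Toy usage: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta_hat = noisy_gradient_descent(lambda th: th, np.ones(5), steps=200)
```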
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Compared to the previous revision, we have included new experiments:
- a comparison to output perturbation,
- an additional dataset (CIFAR-10), and
- features extracted from Scattering Networks.
We have also updated our introduction and related work section with additional references.
Assigned Action Editor: ~Antti_Honkela1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1000