A New Linear Scaling Rule for Differentially Private Hyperparameter Optimization

09 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Differential privacy, deep learning
TL;DR: We provide new state-of-the-art methods for DP image classification via fine-tuning that account for the cost of hyperparameter search.
Abstract: A major direction in differentially private (DP) machine learning is DP fine-tuning: pretraining a model on public data and transferring the extracted features to downstream tasks. This setting is important because many industry deployments fine-tune publicly available feature extractors on proprietary data. In this paper, we propose a new linear scaling rule, a hyperparameter optimization algorithm that privately selects hyperparameters to optimize the privacy-utility tradeoff. A key insight in the design of our method is that the linear scaling rule jointly increases the step size and the number of steps as $\varepsilon$ increases. Our work is the first to obtain state-of-the-art performance on a suite of 16 benchmark tasks across computer vision and natural language processing for a wide range of $\varepsilon \in [0.01, 8.0]$ while accounting for the privacy cost of hyperparameter tuning.
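
The abstract's central mechanism is a rule that grows both the step size and the number of optimization steps linearly with the privacy budget $\varepsilon$. The paper's exact rule and constants are not reproduced here; the following is a minimal Python sketch under that assumption, in which linear_scaling_rule, base_lr, base_steps, and the clamping constants are hypothetical placeholders rather than the authors' implementation.

    # Illustrative sketch only: the paper's exact linear scaling rule is not given
    # in the abstract. We assume a hypothetical rule in which both the step size
    # and the number of steps grow linearly with epsilon, relative to a reference
    # configuration at a base epsilon. All constants are placeholders.

    def linear_scaling_rule(epsilon, base_epsilon=1.0, base_lr=0.5, base_steps=100,
                            max_lr=4.0, max_steps=2000):
        """Return (learning_rate, num_steps) scaled linearly with epsilon."""
        scale = epsilon / base_epsilon
        lr = min(base_lr * scale, max_lr)                  # larger budget -> larger step size
        steps = min(int(base_steps * scale), max_steps)    # larger budget -> more steps
        return lr, max(steps, 1)

    if __name__ == "__main__":
        for eps in (0.01, 0.1, 1.0, 8.0):
            lr, steps = linear_scaling_rule(eps)
            print(f"epsilon={eps:>5}: lr={lr:.3f}, steps={steps}")

The clamps merely keep this hypothetical schedule bounded at large $\varepsilon$; the paper's actual rule, constants, and interaction with the private hyperparameter selection procedure may differ.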
Supplementary Material: pdf
Submission Number: 4936