On the Inherent Privacy of Two-Point Zeroth-Order Projected Gradient Descent

Published: 10 Oct 2024, Last Modified: 07 Dec 2024, NeurIPS 2024 Workshop, CC BY 4.0
Keywords: Differential Privacy, Zeroth Order Optimization
Abstract: Differentially private zeroth-order optimization methods have recently gained popularity in private fine-tuning of machine learning models due to their favorable empirical performance and reduced memory requirements. Current approaches for privatizing zeroth-order methods rely on adding Gaussian noise to the estimated zeroth-order gradients. However, because the search direction in these methods is inherently random, researchers including Tang et al. and Zhang et al. have raised a fundamental question: is the inherent noise in zeroth-order estimators sufficient to ensure the overall differential privacy of the algorithm? This work settles the question for a class of oracle-based optimization algorithms where the oracle returns zeroth-order gradient estimates. In particular, we show that for a fixed initialization, there exist strongly convex objective functions such that running Projected Zeroth-Order Gradient Descent (ZO-GD) is not differentially private. Moreover, we show that, even with random initialization, the privacy loss of ZO-GD increases superlinearly with the number of iterations when minimizing convex objective functions.
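To make the object of study concrete, below is a minimal sketch of the kind of algorithm the abstract refers to: projected gradient descent driven by a two-point zeroth-order gradient estimate, where the only randomness comes from the sampled search direction (no Gaussian noise is added to the estimate). The function names, the Gaussian direction sampling, and the l2-ball feasible set are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def zo_two_point_grad(f, x, mu=1e-3, rng=None):
    # Two-point zeroth-order gradient estimate along a random Gaussian
    # direction u:  g = (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u.
    # The randomness of u is the "inherent noise" the abstract asks about.
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(np.shape(x))
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto the l2 ball of the given radius
    # (a stand-in for a generic convex feasible set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def zo_pgd(f, x0, steps=100, lr=0.1, mu=1e-3, radius=1.0, seed=0):
    # Projected zeroth-order gradient descent (ZO-GD): each iterate is a
    # gradient-estimate step followed by projection. Note that, unlike the
    # privatized variants discussed in the abstract, no extra Gaussian
    # noise is injected -- privacy would have to come from u alone.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = zo_two_point_grad(f, x, mu=mu, rng=rng)
        x = project_l2_ball(x - lr * g, radius=radius)
    return x
```

For a smooth objective, the estimate is unbiased up to O(mu) since E[u uᵀ] = I for a standard Gaussian direction; the paper's negative result says this direction randomness alone does not yield differential privacy for this algorithm class.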
Submission Number: 88