DP-C4: Eliminating Solution Bias in Differentially Private Optimization via Coupled Clipping with Adaptive Thresholds
Keywords: Differential privacy, stochastic optimization, privacy-utility trade-off
Abstract: Differentially private (DP) stochastic optimization algorithms are widely used in privacy-preserving deep learning, where per-sample gradient clipping and noise injection protect sensitive information. However, these operations confine existing DP methods to convergence within a constant-radius neighborhood of a first-order stationary point, leading to solution bias and the well-known privacy-utility trade-off. To enhance model utility, we propose a novel framework called DP-C4, designed to be error-Consistently-decayed, Coupledly-clipped, solution-Calibrated, and Convergence-guaranteed; to our knowledge, it is the first framework with all four properties. Specifically, it incorporates a carefully designed coupled clipping strategy and adaptive clipping thresholds, ensuring that both the clipping bias and the noise variance asymptotically vanish, thereby correcting the DP-induced solution bias. Furthermore, we develop a memory-efficient variant that reduces storage complexity without compromising privacy guarantees. By constructing a suitable Lyapunov function, we prove that our method converges to the optimum in the strongly convex case and to a diminishing neighborhood of a first-order stationary point in the nonconvex case. Our theoretical results are supported by numerical experiments.
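The mechanism the abstract refers to, per-sample gradient clipping with a time-decaying threshold followed by calibrated noise injection, can be illustrated with a minimal sketch. This is a generic DP-SGD-style step under assumed choices (the `C0` base threshold, the `1/sqrt(t)` decay schedule, and the `noise_multiplier` are hypothetical), not the DP-C4 coupled-clipping rule, whose details are not specified in the abstract.

```python
# Illustrative sketch only: a DP-SGD-style step with a decaying clipping
# threshold, showing how a shrinking threshold makes the injected noise
# variance (proportional to C_t^2) shrink as well. The DP-C4 coupled-clipping
# strategy itself is not reproduced here.
import numpy as np

def dp_step(params, per_sample_grads, t, lr=0.1, C0=1.0, noise_multiplier=1.0):
    """One DP step on a batch of per-sample gradients (shape: [batch, dim])."""
    # Hypothetical adaptive threshold: shrink the clipping radius over time.
    C_t = C0 / np.sqrt(t + 1)

    # Per-sample clipping: rescale each gradient to norm at most C_t.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, C_t / np.maximum(norms, 1e-12))

    # Gaussian noise calibrated to the per-sample sensitivity C_t.
    noise = noise_multiplier * C_t * np.random.randn(params.shape[0])

    # Average the clipped gradients, add noise, take a gradient step.
    noisy_grad = clipped.mean(axis=0) + noise / per_sample_grads.shape[0]
    return params - lr * noisy_grad

# Toy usage: per-sample quadratic losses 0.5 * ||x - b_i||^2.
rng = np.random.default_rng(0)
b = rng.normal(size=(32, 5))
x = np.zeros(5)
for t in range(200):
    grads = x[None, :] - b  # per-sample gradients
    x = dp_step(x, grads, t)
```

As the threshold decays, both the clipping bias and the added noise diminish, which is the intuition behind the vanishing-error behavior claimed in the abstract; the paper's coupled clipping presumably controls this decay more carefully to retain the privacy guarantee.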
Primary Area: optimization
Submission Number: 10155