Smoothed Normalization for Efficient Distributed Private Optimization

16 Jan 2025 (modified: 18 Jun 2025) · Submitted to ICML 2025 · CC BY 4.0
TL;DR: We design the first differentially private distributed optimization method with provable convergence guarantees.
Abstract: Federated learning enables training machine learning models while preserving the privacy of participants. Surprisingly, there is no differentially private (DP) distributed method for smooth non-convex optimization problems. The reason is that standard privacy techniques require bounding the participants' contributions, usually enforced via *clipping* of the updates. The existing literature typically either ignores the effect of clipping by assuming bounded gradient norms, or analyzes distributed algorithms with clipping but ignores DP constraints. In this work, we study an alternative approach via *smoothed normalization* of the updates, motivated by its favorable performance in the centralized setting. By integrating smoothed normalization with an error-feedback mechanism, we design a new distributed algorithm $\alpha$-${\sf NormEC}$. We prove that our method achieves a superior convergence rate compared to prior works. By extending $\alpha$-${\sf NormEC}$ to the DP setting, we obtain the first differentially private distributed optimization algorithm with provable convergence guarantees. Finally, we support our theoretical findings with experiments on practical machine learning problems.
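To illustrate the distinction the abstract draws between clipping and smoothed normalization, below is a minimal Python sketch. It is not the paper's algorithm: the exact functional form of smoothed normalization, the smoothing parameter `alpha`, the helper names, and the toy aggregation step are all assumptions for illustration; the error-feedback mechanism and DP noise addition of $\alpha$-${\sf NormEC}$ are omitted.

```python
import numpy as np

def clip(update, c):
    """Standard clipping: rescale the update so its norm is at most c."""
    norm = np.linalg.norm(update)
    return update * min(1.0, c / norm) if norm > 0 else update

def smoothed_normalization(update, alpha):
    """Smoothed normalization (assumed form): divide by (alpha + norm),
    so the output norm is always strictly below 1 and the map is smooth."""
    return update / (alpha + np.linalg.norm(update))

# Hypothetical single round: each worker normalizes its local update before
# aggregation; for differential privacy, calibrated noise would be added on top.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(4)]    # toy local updates
alpha = 1.0                                          # smoothing parameter (assumed)
aggregated = np.mean([smoothed_normalization(u, alpha) for u in updates], axis=0)
print(np.linalg.norm(aggregated))                    # bounded by 1 by construction
```

Because each normalized update has norm strictly below 1, the per-participant contribution is bounded without a hard clipping threshold, which is the property that makes the updates amenable to standard DP noise calibration.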
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: private optimization, distributed learning, clipping, smoothed normalization, error feedback
Submission Number: 2135