Private Federated Learning with Provable Convergence via Smoothed Normalization

Published: 10 Jun 2025, Last Modified: 29 Jun 2025 · CFAgentic @ ICML'25 Poster · CC BY 4.0
Keywords: federated learning, private optimization, clipping, smoothed normalization, error feedback
TL;DR: We design and analyze the first differentially private distributed optimization method with provable convergence guarantees.
Abstract: Federated learning enables training machine learning models while preserving the privacy of participants. Surprisingly, there is no differentially private distributed method with provable convergence guarantees for smooth, non-convex optimization problems. The reason is that standard privacy techniques require bounding the participants' contributions, usually enforced via clipping of the updates. Existing literature either ignores the effect of clipping by assuming bounded gradient norms, or analyzes distributed algorithms with clipping while ignoring DP constraints. In this work, we study an alternative approach via *smoothed normalization* of the updates, motivated by its favorable performance in the single-node setting. By integrating smoothed normalization with an Error Compensation mechanism, we design a new distributed algorithm, $\alpha$-NormEC. We prove that our method achieves a convergence rate superior to that of prior works. By extending $\alpha$-NormEC to the DP setting, we obtain the first differentially private distributed optimization algorithm with provable convergence guarantees. Finally, our empirical results from neural network training indicate robust convergence of $\alpha$-NormEC across different parameter settings.
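To illustrate the contrast the abstract draws between clipping and smoothed normalization, here is a minimal Python sketch. It assumes the common smoothed-normalization form $x / (\alpha + \|x\|)$, whose output norm is strictly below 1 and therefore bounds each participant's contribution without a hard threshold; the exact operator, the Error Compensation step, and the noise calibration used by $\alpha$-NormEC are not specified here, and the function names (`smoothed_normalize`, `private_aggregate`) are hypothetical.

```python
import numpy as np

def clip(update, c):
    """Standard clipping: rescale the update so its norm is at most c."""
    norm = np.linalg.norm(update)
    return update * min(1.0, c / norm) if norm > 0 else update

def smoothed_normalize(update, alpha):
    """Smoothed normalization (assumed form): x / (alpha + ||x||).
    The output norm is strictly below 1, bounding the participant's
    contribution without a hard clipping threshold."""
    return update / (alpha + np.linalg.norm(update))

def private_aggregate(updates, alpha, noise_std, rng=None):
    """Illustrative DP-style aggregation: bound each client's update via
    smoothed normalization, sum, and add Gaussian noise. This is a toy
    sketch, not the alpha-NormEC algorithm or its privacy accounting."""
    rng = np.random.default_rng() if rng is None else rng
    bounded = [smoothed_normalize(u, alpha) for u in updates]
    total = np.sum(bounded, axis=0)
    return total + rng.normal(0.0, noise_std, size=total.shape)

# Toy usage with random client updates.
clients = [np.random.randn(10) for _ in range(4)]
aggregated = private_aggregate(clients, alpha=0.1, noise_std=0.5)
```

Unlike clipping, the smoothed operator leaves no update unchanged, which is why an error-compensation mechanism is paired with it in the paper's algorithm.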
Submission Number: 11