Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy

ICLR 2026 Conference Submission 12787 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Optimization, Differential Privacy, High Probability Analysis
TL;DR: We propose Clip21-SGD2M, an algorithm that provably converges without imposing data heterogeneity bounds
Abstract: Achieving both strong Differential Privacy (DP) and efficient optimization is critical for Federated Learning (FL), where client data must remain confidential without compromising model performance. However, existing methods typically sacrifice one for the other: they either provide robust DP guarantees at the cost of assuming bounded gradients or bounded data heterogeneity, or they achieve strong optimization rates without any privacy protection. In this paper, we bridge this gap by introducing Clip21-SGD2M, a novel method that integrates gradient clipping, heavy-ball momentum, and error feedback to deliver state-of-the-art optimization rates and strong privacy guarantees. Specifically, we establish optimal convergence rates for non-convex smooth distributed problems, even in the challenging setting of heterogeneous client data, without requiring restrictive boundedness assumptions. Additionally, we demonstrate that Clip21-SGD2M achieves competitive (local-)DP guarantees, comparable to the best-known results. Numerical experiments on non-convex logistic regression and neural network training confirm the superior optimization performance of our approach across a wide range of DP noise levels, underscoring its practical value in real-world FL applications.
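To make the abstract's ingredients concrete, here is a minimal Python/NumPy sketch of how gradient clipping, error feedback, and two momentum buffers can be combined with a Gaussian DP mechanism. It follows the general Clip21 pattern of clipping the residual between each local gradient and a client-side shift, but the parameter names (`tau`, `beta1`, `beta2`, `sigma_dp`) and the exact update order are illustrative assumptions, not the paper's precise Clip21-SGD2M recursion.

```python
import numpy as np

def clip(v, tau):
    """Scale v so its Euclidean norm is at most tau (no-op if already inside the ball)."""
    n = np.linalg.norm(v)
    return v if n <= tau else v * (tau / n)

def clip21_sgd2m_sketch(grads_fn, x0, num_clients, tau, lr,
                        beta1, beta2, sigma_dp, steps, rng):
    """Hypothetical sketch, NOT the authors' exact method:
    Clip21-style error feedback on each client, Gaussian noise for (local) DP,
    and two server-side momentum buffers ("double momentum")."""
    x = x0.copy()
    d = x0.size
    shift = np.zeros((num_clients, d))  # per-client error-feedback state
    w = np.zeros(d)                     # server mirror of the average client shift
    m = np.zeros(d)                     # first momentum buffer
    u = np.zeros(d)                     # second momentum buffer
    for _ in range(steps):
        delta_avg = np.zeros(d)
        for i in range(num_clients):
            g = grads_fn(i, x)                   # local stochastic gradient
            delta = clip(g - shift[i], tau)      # clip the residual (Clip21 idea)
            shift[i] += delta                    # error-feedback update
            # Gaussian mechanism on the clipped (hence bounded-norm) message
            delta_avg += (delta + sigma_dp * rng.standard_normal(d)) / num_clients
        w += delta_avg                           # server tracks the average shift
        m = beta1 * m + (1 - beta1) * w          # heavy-ball momentum
        u = beta2 * u + (1 - beta2) * m          # second momentum buffer
        x -= lr * u
    return x
```

Note the design choice in this sketch: clipping the residual `g - shift[i]` rather than the raw gradient lets the shifts progressively track the true local gradients, which is the intuitive reason no bounded-gradient or bounded-heterogeneity assumption is needed; the paper's analysis makes this precise for the actual algorithm.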
Supplementary Material: zip
Primary Area: optimization
Submission Number: 12787