Faster Convergence of Local SGD for Over-Parameterized Models

Published: 30 Mar 2024, Last Modified: 30 Mar 2024. Accepted by TMLR.
Abstract: Modern machine learning architectures are often highly expressive. They are usually over-parameterized and can interpolate the data by driving the empirical loss close to zero. We analyze the convergence of Local SGD (or FedAvg) for such over-parameterized models in the heterogeneous data setting and improve upon the existing literature by establishing the following convergence rates. For general convex loss functions, we establish an error bound of $\mathcal{O}(1/T)$ under a mild data similarity assumption and an error bound of $\mathcal{O}(K/T)$ otherwise, where $K$ is the number of local steps and $T$ is the total number of iterations. For non-convex loss functions, we prove an error bound of $\mathcal{O}(K/T)$. In both cases, these bounds improve upon the best previously known bound of $\mathcal{O}(1/\sqrt{nT})$, where $n$ is the number of agents, which was established without assuming over-parameterization. We complement our results by providing problem instances in which our established convergence rates are tight up to a constant factor with a reasonably small stepsize. Finally, we validate our theoretical results with large-scale numerical experiments that reveal the convergence behavior of Local SGD for practical over-parameterized deep learning models, clearly exhibiting the $\mathcal{O}(1/T)$ convergence rate.
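For readers unfamiliar with the algorithm analyzed in the abstract, below is a minimal sketch of the Local SGD (FedAvg) loop: $n$ agents each take $K$ local stochastic gradient steps between averaging rounds, for $T = K \times$ (number of rounds) total iterations. The per-agent quadratic losses, stepsize, and all variable names are illustrative assumptions for this sketch, not taken from the paper.

```python
# Minimal sketch of Local SGD (FedAvg), assuming a toy setting:
# n agents, each holding its own least-squares problem (A_i, b_i).
# All names and hyperparameters here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 10           # number of agents, model dimension
K, rounds = 5, 200     # local steps per round; T = K * rounds total iterations
lr = 0.01              # stepsize

# Heterogeneous local data: agent i holds (A[i], b[i]).
A = [rng.normal(size=(20, d)) for _ in range(n)]
b = [rng.normal(size=20) for _ in range(n)]

w = np.zeros(d)  # shared model, synchronized every K local steps
for _ in range(rounds):
    local_models = []
    for i in range(n):
        w_i = w.copy()
        for _ in range(K):
            j = rng.integers(len(b[i]))               # sample one local data point
            g = (A[i][j] @ w_i - b[i][j]) * A[i][j]   # stochastic gradient of agent i's loss
            w_i -= lr * g
        local_models.append(w_i)
    w = np.mean(local_models, axis=0)  # server averages the local models
```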
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Yunwen_Lei1
Submission Number: 1762