Keywords: federated learning, distributed learning, decentralized learning, machine learning, optimization
TL;DR: We analyze momentum under cyclic client participation and theoretically prove that it remains inevitably affected by statistical heterogeneity.
Abstract: Recent works have explored the use of momentum in local methods to enhance distributed SGD.
This is particularly appealing in Federated Learning (FL), where momentum intuitively appears as a solution to mitigate the effects of statistical heterogeneity.
Despite recent progress in this direction, it is still unclear whether momentum can guarantee convergence under unbounded heterogeneity in decentralized scenarios, where only some workers participate at each round.
In this work we analyze momentum under cyclic client participation, and theoretically prove that it remains inevitably affected by statistical heterogeneity. Similarly to SGD, we prove that decreasing step-sizes do not help either: in fact, any schedule decreasing faster than $\Theta\left({1}/{t}\right)$ leads to convergence to a constant value that depends on the initialization and the heterogeneity bound.
Numerical results corroborate the theory, and deep learning experiments confirm its relevance for realistic settings.
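A toy numerical sketch of the phenomenon described above (not the paper's setup or algorithm): local SGD with heavy-ball momentum under cyclic client participation on scalar quadratic client objectives whose minimizers differ (statistical heterogeneity). All constants, schedules, and function names below are illustrative assumptions. With a step-size schedule decaying faster than $\Theta(1/t)$, the iterate stalls at a limit that depends on the initialization rather than reaching the minimizer of the average objective.

```python
# Toy sketch (illustrative, not the paper's algorithm or constants):
# local heavy-ball momentum under cyclic client participation on
# f_i(x) = 0.5 * (x - b_i)^2, with heterogeneous minimizers b_i.

def run(schedule, x0, rounds=2000, local_steps=10, beta=0.9):
    b = [-2.0, 0.0, 4.0]              # per-client minimizers (heterogeneous)
    x, m = x0, 0.0
    for t in range(rounds):
        i = t % len(b)                # cyclic client participation
        lr = schedule(t)
        for _ in range(local_steps):  # local momentum updates on client i
            g = x - b[i]              # gradient of f_i at x
            m = beta * m + g          # heavy-ball momentum buffer
            x -= lr * m
    return x

x_star = (-2.0 + 0.0 + 4.0) / 3       # minimizer of the average objective

# Schedule decaying faster than Theta(1/t): the limit depends on the
# initialization and the spread of the b_i, not on x_star.
def fast(t):
    return 0.01 / (t + 1) ** 2

# Theta(1/t)-type schedule for contrast: in this toy example the iterate
# keeps making progress toward x_star.
def slow(t):
    return 0.01 / (t + 1)

print("1/t^2 schedule:", run(fast, x0=5.0), run(fast, x0=-5.0), "target:", x_star)
print("1/t   schedule:", run(slow, x0=5.0), "target:", x_star)
```

In this toy, the two runs with the fast-decaying schedule end at different points, both away from the average minimizer, illustrating the dependence on initialization; the paper's results concern what can be guaranteed in general, not this specific example.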
Submission Number: 29