Abstract: Over the past few years, the federated learning (FL) community has witnessed a proliferation of new FL algorithms. However, our understanding of the theory of FL is still fragmented, and a thorough, formal comparison of these algorithms remains elusive. Motivated by this gap, we show that many existing FL algorithms can be understood from an operator-splitting point of view. This unification allows us to compare different algorithms with ease, refine previous convergence results, and uncover new algorithmic variants. In particular, our analysis reveals the vital role played by the step size in FL algorithms. We perform numerical experiments on both convex and nonconvex models to validate our findings.
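To make the operator-splitting perspective concrete, below is a standard illustration from the splitting literature, not a quotation of the paper's own derivation. The notation (number of clients $m$, local losses $f_i$, step size $\eta$, global model $x$) is ours: the local update of a FedAvg-style method is a forward (gradient) step on $f_i$, while the local subproblem of a FedProx-style method is a backward (proximal) step, with $\eta$ entering both, which is consistent with the abstract's emphasis on the step size.

```latex
% Federated objective over m clients with local losses f_i (our notation):
\[
  \min_{x \in \mathbb{R}^d} \; F(x) \;=\; \frac{1}{m} \sum_{i=1}^{m} f_i(x).
\]
% Forward (explicit gradient) step with step size \eta,
% as in a FedAvg-style local update:
\[
  x_i^{+} \;=\; x_i - \eta \, \nabla f_i(x_i).
\]
% Backward (implicit proximal) step with the same step size,
% as in a FedProx-style local subproblem:
\[
  x_i^{+} \;=\; \operatorname{prox}_{\eta f_i}(x)
  \;=\; \operatorname*{arg\,min}_{z} \Big\{ f_i(z) + \tfrac{1}{2\eta}\,\lVert z - x \rVert^2 \Big\}.
\]
```

Viewed this way, differences between FL algorithms reduce to which splitting (forward, backward, or a combination) is applied to the sum $\sum_i f_i$, and the step size $\eta$ is the knob shared by all of them.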