Unlocking the Potential of Weighting Methods in Federated Learning Through Communication Compression
Keywords: Convex optimization, Compression, Stochastic optimization
Abstract: Modern machine learning problems are frequently formulated in the federated learning setting and involve inherently heterogeneous data. Weighting methods are efficient in terms of iteration complexity and represent a common approach in this setting. However, they do not directly address the main obstacle in federated and distributed learning: the communication bottleneck. We tackle this issue by incorporating compression into the weighting scheme. We establish convergence under a convexity assumption for both exact and stochastic oracles. Finally, we evaluate the practical performance of the proposed method on real-world problems.
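The abstract does not specify the compression operator or the weighting rule; the paper's actual method may differ. As a minimal illustrative sketch, the following combines an unbiased random-k sparsifier (a standard compressor in this literature) with a weighted average of client gradients. The functions `rand_k` and `weighted_compressed_step` are hypothetical names introduced here for illustration only.

```python
import numpy as np

def rand_k(x, k, rng):
    # Unbiased random-k sparsification: keep k of d coordinates,
    # rescale the survivors by d/k so E[rand_k(x)] = x.
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

def weighted_compressed_step(grads, weights, k, rng):
    # Weighted aggregation of compressed client gradients:
    # normalize the weights, compress each gradient, then average.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * rand_k(g, k, rng) for wi, g in zip(w, grads))

rng = np.random.default_rng(0)
grads = [rng.normal(size=10) for _ in range(4)]
agg = weighted_compressed_step(grads, [1.0, 2.0, 3.0, 4.0], k=3, rng=rng)
```

With `k` equal to the full dimension, the compressor is the identity and the step reduces to a plain weighted average; smaller `k` trades communication (only `k` coordinates per client) for variance.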
Supplementary Material: pdf
Primary Area: optimization
Submission Number: 25365