Towards Optimal Communication Complexity in Distributed Non-Convex Optimization

Published: 31 Oct 2022, Last Modified: 15 Jan 2023
Venue: NeurIPS 2022 Accept
Readers: Everyone
Keywords: Distributed Optimization, Intermittent Communication Setting, Federated Learning, Non-convex Optimization, Lower Bounds, Stochastic Optimization
TL;DR: We propose a new algorithm for distributed non-convex optimization and show that it is optimal in relevant regimes.
Abstract: We study the problem of distributed stochastic non-convex optimization with intermittent communication. We consider the full participation setting where $M$ machines work in parallel over $R$ communication rounds and the partial participation setting where $M$ machines are sampled independently every round from some meta-distribution over machines. We propose and analyze a new algorithm that improves existing methods by requiring fewer and lighter variance reduction operations. We also present lower bounds, showing our algorithm is either $\textit{optimal}$ or $\textit{almost optimal}$ in most settings. Numerical experiments demonstrate the superior performance of our algorithm.
Supplementary Material: zip
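
The abstract refers to the intermittent communication setting, in which $M$ machines run local updates in parallel and synchronize over $R$ communication rounds. The sketch below is a minimal, hypothetical local-SGD style illustration of that setting only; it is not the paper's proposed algorithm, and every name and parameter here (`intermittent_communication_sgd`, `M`, `R`, `K`, `lr`, `grad_oracle`) is an assumption introduced for illustration.

```python
import numpy as np

def intermittent_communication_sgd(grad_oracle, d, M=8, R=20, K=5, lr=0.1, seed=0):
    """Generic local-SGD sketch of the intermittent communication setting:
    M machines each take K local stochastic gradient steps per round, then
    their iterates are averaged once per round, for R communication rounds.
    This is an illustrative stand-in, not the paper's variance-reduced method."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)  # shared iterate, synchronized at every communication round
    for _ in range(R):
        local_iterates = []
        for _ in range(M):
            x_m = x.copy()
            for _ in range(K):
                g = grad_oracle(x_m, rng)  # stochastic gradient at the local iterate
                x_m -= lr * g
            local_iterates.append(x_m)
        x = np.mean(local_iterates, axis=0)  # one communication round: average iterates
    return x

# Example usage: a noisy quadratic objective f(x) = 0.5 * ||x - 1||^2
if __name__ == "__main__":
    d = 10
    grad_oracle = lambda x, rng: (x - np.ones(d)) + 0.1 * rng.standard_normal(d)
    x_final = intermittent_communication_sgd(grad_oracle, d)
    print("final distance to optimum:", np.linalg.norm(x_final - np.ones(d)))
```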
