Abstract: We consider centralized distributed algorithms for general stochastic convex optimization problems, which we approximate by a finite-sum minimization problem with summands distributed among computational nodes. We exploit statistical similarity between the summands and the whole sum to construct a distributed accelerated cubic-regularized Newton's method that achieves the lower communication complexity bound for this setting and improves upon the existing upper bound. Further, we use this algorithm to obtain convergence rate bounds for the original stochastic optimization problem, and we compare our bounds with existing ones in several regimes where the goal is to minimize the number of communication rounds and to improve parallelization as the number of workers grows.
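To illustrate the statistical-similarity idea from the abstract, below is a minimal sketch of a simplified, non-accelerated variant: each communication round averages only gradients across workers, while the cubic-regularized Newton step uses a single node's local Hessian as a statistically similar surrogate for the global one. This is not the paper's actual accelerated algorithm; the names `cubic_newton_step`, `worker_grads`, `server_hess`, and the regularization parameter `M` are illustrative assumptions.

```python
import numpy as np

def cubic_newton_step(grad, H_local, M, tol=1e-10, max_iter=100):
    """Solve the cubic-regularized model
        min_h  grad^T h + 0.5 h^T H_local h + (M/6) ||h||^3
    using only the server's local Hessian H_local (the statistically
    similar surrogate). The stationarity condition gives
        h(r) = -(H_local + (M/2) r I)^{-1} grad  with  r = ||h(r)||,
    so we find the fixed point r by bisection (||h(r)|| is decreasing in r).
    """
    eye = np.eye(grad.shape[0])

    def h_of(r):
        return -np.linalg.solve(H_local + 0.5 * M * r * eye, grad)

    # Bracket the fixed point r = ||h(r)||.
    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(h_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return h_of(hi)

def distributed_cubic_newton(x0, worker_grads, server_hess, M, n_rounds):
    """Simplified outer loop (no acceleration): one gradient-averaging
    communication round per iteration; Hessian computations stay local
    to the server node, so second-order information is never communicated."""
    x = x0
    for _ in range(n_rounds):
        g = np.mean([grad(x) for grad in worker_grads], axis=0)  # one communication round
        x = x + cubic_newton_step(g, server_hess(x), M)          # local second-order step
    return x
```

The point of the sketch is the communication pattern: because the local Hessian is close to the global one under statistical similarity, only first-order information (gradients) crosses the network, which is what makes the reduced communication complexity of the setting attainable.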