Zero-Order One-Point Estimate with Distributed Stochastic Gradient Techniques

TMLR Paper2328 Authors

04 Mar 2024 (modified: 27 Jun 2024) · Rejected by TMLR
Abstract: In this work, we consider a distributed multi-agent stochastic optimization problem, where each agent holds a local objective function that is smooth and strongly convex and that is subject to a stochastic process. The goal is for all agents to collaborate to find a common solution that optimizes the sum of these local functions. Under the practical assumption that agents can obtain only noisy numerical function queries at exactly one point at a time, we extend the distributed stochastic gradient-tracking (DSGT) method to the bandit setting, where we do not have access to the gradient, and we introduce a zero-order (ZO) one-point estimate (1P-DSGT). We then consider another consensus-based distributed stochastic gradient (DSG) method under the same setting and introduce the same estimate (1P-DSG). We analyze the convergence of these novel techniques for smooth and strongly convex objectives using stochastic approximation tools, and we prove that they \textit{converge almost surely to the optimum} despite the bias of our gradient estimate. We then study the convergence rate of our methods. With constant step sizes, our methods compete with their first-order (FO) counterparts by achieving a linear rate $O(\varrho^k)$ as a function of the number of iterations $k$. To the best of our knowledge, this is the first work that proves this rate in the noisy estimation setting or with one-point estimators. With vanishing step sizes, we establish a rate of $O(\frac{1}{\sqrt{k}})$ after a sufficient number of iterations $k > K_0$. This is the optimal rate proven in the literature for centralized techniques utilizing one-point estimators. We then provide a regret bound of $O(\sqrt{k})$ with vanishing step sizes. We further illustrate the usefulness of the proposed techniques using numerical experiments.
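To make the abstract's central object concrete, the following is a minimal sketch of a one-point zeroth-order gradient estimate of the kind described: a single (possibly noisy) function query at a randomly perturbed point yields a biased estimate of the gradient. The estimator form $(d/\delta)\, f(x + \delta u)\, u$ with $u$ uniform on the unit sphere is the standard one-point construction from the ZO literature; the function, variable names, and parameters below are illustrative assumptions, not the paper's exact algorithm (which embeds such an estimate inside distributed gradient tracking).

```python
import numpy as np

def one_point_zo_gradient(f, x, delta, rng):
    """One-point zeroth-order gradient estimate.

    Issues a single function query f(x + delta * u) at a randomly
    perturbed point and returns (d / delta) * f(x + delta * u) * u.
    This is a biased estimate of grad f(x): in expectation over u it
    equals the gradient of a smoothed surrogate of f, approaching
    grad f(x) as delta shrinks.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

# Illustration on a simple strongly convex quadratic (a hypothetical
# objective, chosen only for checking): f(x) = 0.5 * ||x||^2, so
# grad f(x) = x. Averaging many one-point estimates recovers the
# gradient, while any single estimate is very noisy.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0])
est = np.mean(
    [one_point_zo_gradient(f, x, delta=0.5, rng=rng) for _ in range(40000)],
    axis=0,
)
```

The high per-query variance visible here (each sample scales like $d/\delta$) is exactly why single-point methods historically achieve slower rates than two-point or first-order methods, and why the linear rate claimed in the abstract for constant step sizes is notable.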
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Yunwen_Lei1
Submission Number: 2328