Distributed Online and Bandit Convex Optimization

Published: 23 Nov 2022, Last Modified: 05 May 2023 · OPT 2022 Poster
Keywords: Distributed Optimization, Intermittent Communication Setting, Federated Learning, Online Optimization, Adaptive Adversary
TL;DR: We show that collaboration leads to a provable speed-up in distributed bandit convex optimization but is not useful for distributed online convex optimization.
Abstract: We study the problems of distributed online and bandit convex optimization against an adaptive adversary. Our goal is to minimize the average regret of M machines that work in parallel over T rounds and are allowed to communicate intermittently over R rounds. Assuming the underlying cost functions are convex, our results show that collaboration is not beneficial when the machines have access to first-order (gradient) information at the queried points: in this setting, simple non-collaborative algorithms are min-max optimal, in contrast to the stochastic setting, where each machine samples its cost functions from a fixed distribution. Next, we consider the more challenging setting of federated optimization with bandit (zeroth-order) feedback, where the machines can only access values of the cost functions at the queried points. The key finding here is the identification of a high-dimensional regime where collaboration is beneficial and may even lead to a linear speedup in the number of machines. Our results are a first step towards bridging the gap between distributed online optimization against stochastic and adaptive adversaries.
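For intuition about the "simple non-collaborative" baseline mentioned in the abstract, the sketch below runs independent online (sub)gradient descent on each of the M machines with no communication at all. This is only an illustrative reading of that baseline, not the paper's own algorithm: the names grad_oracle and project_l2_ball, the constraint set (a Euclidean ball), and the 1/sqrt(T) step size are assumptions made for the example.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def non_collaborative_ogd(grad_oracle, M, T, dim, lr=None, radius=1.0):
    """Each of the M machines runs online gradient descent on its own
    sequence of adversarial cost functions, with no communication.

    grad_oracle(m, t, x) is a hypothetical interface that returns a
    (sub)gradient of machine m's round-t cost function at the point x.
    Returns the points queried by every machine over the horizon."""
    if lr is None:
        lr = radius / np.sqrt(T)          # standard O(1/sqrt(T)) step size
    iterates = np.zeros((M, dim))         # current iterate on each machine
    played = np.zeros((M, T, dim))        # points queried over T rounds
    for t in range(T):
        for m in range(M):
            played[m, t] = iterates[m]
            g = grad_oracle(m, t, iterates[m])
            iterates[m] = project_l2_ball(iterates[m] - lr * g, radius)
    return played
```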
