A Bayesian Framework for Online Nonconvex Optimization over Distributed Processing Networks

Published: 2023, Last Modified: 29 Sept 2023 | INFOCOM 2023
Abstract: In many applications such as statistical machine learning, reinforcement learning, and optimization for large data centers, the increasing data size and model complexity have made it impractical to run optimization on a single machine. Solving the distributed optimization problem has therefore become an important task. In this work, we consider a distributed processing network $G = (\mathcal{V}, \mathcal{E})$ with $n$ nodes, where each node $i$ can only evaluate the values of a local function $f_i$ (i.e., has zeroth-order information) and can only communicate with its neighbors. The objective is to reach consensus on the global optimizer of $\max_{x \in \mathcal{X}} \frac{1}{n}\sum_{i=1}^{n} f_i(x)$. Previous methods either assume first-order (gradient) information, which is unsuitable for many model-free learning scenarios, or use zeroth-order information but assume convex objective functions, guaranteeing only convergence to a stationary point for nonconvex objectives. To address these limitations, we drop both the known-gradient assumption and the convexity assumption. Instead, we propose a distributed Bayesian framework for the problem with only zeroth-order information and general nonconvex objective functions lying in a Matérn Reproducing Kernel Hilbert Space (RKHS). Under this framework, we propose an algorithm and show that, with high probability, it reaches consensus across all nodes and achieves sublinear regret with respect to the global optimum. The results are validated through numerical studies.
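A minimal, illustrative sketch (not the authors' algorithm) of the setting described above: each node only evaluates its own local objective (zeroth-order access), fits a Gaussian-process surrogate with a Matérn kernel to those evaluations, queries new points via an upper-confidence-bound rule, and then gossip-averages the posterior means with its graph neighbors to approach consensus on a maximizer. The ring topology, local objectives, UCB weight, and mixing matrix below are all assumptions made for this example.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
n_nodes = 4
# Hypothetical ring network: node i communicates only with i-1 and i+1.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

# Illustrative nonconvex local objectives f_i on X = [0, 1]; zeroth-order access only.
def f_local(i, x):
    return np.sin(5.0 * x + i) + 0.3 * np.cos(9.0 * x)

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # discretized domain X
X_hist = [rng.uniform(0.0, 1.0, size=(3, 1)) for _ in range(n_nodes)]
y_hist = [f_local(i, X_hist[i]).ravel() for i in range(n_nodes)]
beta = 2.0  # UCB exploration weight (assumed)

# Phase 1: each node runs local Bayesian optimization with zeroth-order queries.
for t in range(20):
    for i in range(n_nodes):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
        gp.fit(X_hist[i], y_hist[i])
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(mu + beta * sigma)].reshape(1, 1)
        X_hist[i] = np.vstack([X_hist[i], x_next])
        y_hist[i] = np.append(y_hist[i], f_local(i, x_next))

# Phase 2: gossip-average the final posterior means toward a consensus surrogate.
mu_nodes = []
for i in range(n_nodes):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(X_hist[i], y_hist[i])
    mu_nodes.append(gp.predict(candidates))
mu_nodes = np.array(mu_nodes)

W = np.zeros((n_nodes, n_nodes))          # doubly stochastic mixing matrix for the ring
for i in range(n_nodes):
    W[i, i] = 1.0 / 3.0
    for j in neighbors[i]:
        W[i, j] += 1.0 / 3.0
for _ in range(50):                        # repeated neighbor averaging drives consensus
    mu_nodes = W @ mu_nodes

x_star = candidates[np.argmax(mu_nodes[0])]  # rows are (nearly) identical after mixing
print("consensus estimate of the global maximizer:", x_star)

This sketch only combines the ingredients named in the abstract (zeroth-order queries, a Matérn-kernel surrogate, neighbor-only communication); the paper's actual algorithm and its consensus and regret guarantees are not reproduced here.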