A Stochastic Linearized Augmented Lagrangian Method for Decentralized Bilevel Optimization

Published: 31 Oct 2022, Last Modified: 24 Oct 2022 · NeurIPS 2022 Accept
Keywords: Decentralized bilevel optimization, stochastic linearized augmented Lagrangian method (SLAM), multi-agent actor-critic algorithm
TL;DR: This work develops a stochastic linearized augmented Lagrangian method (SLAM) for solving general nonconvex bilevel optimization problems over a graph, where both upper and lower optimization variables are able to achieve a consensus.
Abstract: Bilevel optimization has been shown to be a powerful framework for formulating multi-task machine learning problems, e.g., reinforcement learning (RL) and meta-learning, where the decision variables are coupled across both levels of the minimization problems. In practice, the learning tasks may be located in different computing environments, so a decentralized training framework is needed to implement multi-agent and multi-task learning. We develop a stochastic linearized augmented Lagrangian method (SLAM) for solving general nonconvex bilevel optimization problems over a graph, where both the upper- and lower-level optimization variables achieve consensus. We also establish that the theoretical convergence rate of the proposed SLAM to the Karush-Kuhn-Tucker (KKT) points of this class of problems is of the same order as that achieved by classical distributed stochastic gradient descent for single-level nonconvex minimization problems. Numerical results on multi-agent RL problems showcase the superiority of SLAM over the benchmarks.
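The SLAM updates themselves are not given on this page, but the setting the abstract describes can be illustrated with a toy sketch: agents on a graph each hold copies of the upper- and lower-level variables, take stochastic gradient steps on their local objectives, and mix with neighbors so both levels reach consensus. Everything below (the mixing matrix `W`, the quadratic objectives, the targets `a`, the step sizes) is invented for illustration; this is plain decentralized stochastic gradient with consensus averaging, not the paper's linearized augmented Lagrangian method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three agents on a fully connected toy graph with a doubly stochastic
# mixing matrix W (rows and columns sum to 1).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

a = np.array([1.0, 2.0, 3.0])   # hypothetical per-agent upper-level targets
x = np.zeros(3)                 # upper-level variable, one local copy per agent
y = np.zeros(3)                 # lower-level variable, one local copy per agent

alpha, beta = 0.1, 0.5          # upper- and lower-level step sizes
for _ in range(500):
    # Lower level: gradient step on g_i(x_i, y_i) = 0.5*(y_i - x_i)^2,
    # combined with neighbor mixing so the lower-level copies also agree.
    y = W @ y - beta * (y - x)
    # Upper level: noisy gradient of f_i(x_i) = 0.5*(x_i - a_i)^2 mimics
    # a stochastic oracle; mixing drives the copies toward consensus.
    grad = (x - a) + 0.01 * rng.standard_normal(3)
    x = W @ x - alpha * grad

# Each agent's x_i ends up near the network-wide target mean(a) = 2.0,
# and y_i tracks x_i, so both levels approximately reach consensus.
print(x.round(2), y.round(2))
```

With a constant step size, decentralized gradient descent of this form converges only to a neighborhood of the exact consensus solution (the per-agent copies retain a small bias toward their local targets); the paper's contribution is an augmented-Lagrangian mechanism with convergence guarantees to KKT points, which this sketch does not reproduce.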
Supplementary Material: pdf