Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: We provide the first general framework, with convergence guarantees, for applying learning-to-optimize (L2O) schemes to any convex optimization problem.
Abstract: Many applications require quickly and repeatedly solving a certain type of optimization problem, each time with new (but similar) data. However, state-of-the-art general-purpose optimization methods may converge too slowly for real-time use. This shortcoming is addressed by “learning to optimize” (L2O) schemes, which construct neural networks from parameterized forms of the update operations of general-purpose methods. Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data. This results in task-specific algorithms (e.g., LISTA, ALISTA, and D-LADMM) that can converge order(s) of magnitude faster than their general-purpose counterparts. We provide the first general L2O convergence theory by wrapping all L2O schemes for convex optimization within a single framework. Existing L2O schemes form special cases, and we give a practical guide for applying our L2O framework to other problems. Using safeguarding, our theory proves that, as the number of network layers increases, the distance between inferences and the solution set goes to zero, i.e., every cluster point of the inferences is a solution. Our numerical examples demonstrate the efficacy of our approach for both existing and new L2O methods.
Keywords: L2O, learn to optimize, fixed point, machine learning, neural network, ADMM, LADMM, ALISTA, D-LADMM
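To make the safeguarding idea from the abstract concrete, here is a minimal Python sketch for the LASSO problem, assuming an ISTA-style proximal-gradient step as the convergent fallback operator and a residual-reduction acceptance rule. The `learned_layers` callables, the constant `c`, and this particular safeguard condition are illustrative assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(x, A, b, lam, alpha):
    """Classical proximal-gradient (ISTA) update for the LASSO problem
    min_x 0.5*||Ax - b||^2 + lam*||x||_1; serves as the fallback operator T.
    The step size alpha should satisfy alpha <= 1 / ||A^T A||_2."""
    return soft_threshold(x - alpha * A.T @ (A @ x - b), alpha * lam)

def residual(x, A, b, lam, alpha):
    """Fixed-point residual ||T(x) - x||; it is zero exactly at LASSO solutions."""
    return np.linalg.norm(ista_step(x, A, b, lam, alpha) - x)

def safeguarded_l2o(A, b, lam, learned_layers, alpha, c=0.99, n_iter=50):
    """Apply learned updates layer by layer, falling back to the classical
    ISTA step whenever a learned candidate fails to shrink the residual."""
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        # Learned candidate update (e.g., one LISTA-style layer); the
        # `learned_layers` callables are hypothetical stand-ins here.
        layer = learned_layers[min(k, len(learned_layers) - 1)]
        x_candidate = layer(x, A, b)
        # Safeguard: accept the learned step only if it reduces the
        # fixed-point residual by at least the factor c.
        if residual(x_candidate, A, b, lam, alpha) <= c * residual(x, A, b, lam, alpha):
            x = x_candidate
        else:
            x = ista_step(x, A, b, lam, alpha)  # convergent fallback step
    return x
```

In this sketch, the fallback is a convergent fixed-point iteration and the safeguard only accepts learned steps that shrink its residual, so the overall sequence retains convergence while learned updates can still provide fast progress on in-distribution data. As a sanity check, passing `learned_layers = [lambda x, A, b: ista_step(x, A, b, lam, alpha)]` reduces the routine to plain ISTA.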