Neural Fixed-Point Acceleration for Second-order Cone Optimization Problems

TMLR Paper 334 Authors

02 Aug 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: Continuous fixed-point problems are a computational primitive in numerical computing, optimization, machine learning, and the natural and social sciences, and have recently been incorporated into deep learning models as optimization layers. Acceleration of fixed-point computations has traditionally been explored in optimization research without the use of learning. In this work, we are interested in the amortized optimization setting, where similar optimization problems need to be solved repeatedly. We introduce neural fixed-point acceleration, a framework to automatically learn to accelerate fixed-point problems drawn from a distribution; a key question motivating our work is to better understand which characteristics make neural acceleration more beneficial for some problems than for others. We apply the framework to solve second-order cone programs with the Splitting Conic Solver (SCS), and evaluate on distributions of Lasso problems and Kalman filtering problems. Our main results show a $10\times$ improvement in accuracy on the Kalman filtering distribution, while the gains on Lasso are much more modest. We then isolate a few factors that make neural acceleration much more useful for our distributions of Kalman filtering problems than for the Lasso problems. We apply a number of problem and distribution modifications to a scaled-down version of the Lasso problem distribution, adding in properties that make it structurally closer to Kalman filtering, and show when the problem distribution benefits from neural acceleration. Our experiments suggest that linear dynamical systems may be a class of optimization problems that benefit from neural acceleration.
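For reference on the problem class named in the abstract, the following is a minimal illustrative sketch (not taken from the submission) of a single Lasso instance solved with SCS through cvxpy; the dimensions, data, and regularization weight are arbitrary placeholders rather than the paper's experimental setup.

import cvxpy as cp
import numpy as np

# Illustrative Lasso instance: minimize 0.5*||A x - b||_2^2 + lam*||x||_1.
# Dimensions and regularization weight are arbitrary, not the paper's setup.
m, n = 50, 100
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm1(x))
problem = cp.Problem(objective)

# cvxpy reformulates the problem as a second-order cone program and hands it to
# SCS, whose fixed-point iterations are the computation the abstract proposes
# to accelerate with learning.
problem.solve(solver=cp.SCS)
print(problem.status, problem.value)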
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- Added additional background definitions and explanations to improve clarity in Section 3.
- Added additional descriptions of algorithms, experimental methodology, models, run-times and interpretations in Sections 4 & 5.
- Added further discussion on learning-to-optimize work in Section 2.
- Added a number of clarifications to address reviewer questions throughout, and paraphrased the conclusion as requested.
Assigned Action Editor: ~Nadav_Cohen1
Submission Number: 334