Lagrangian Proximal Gradient Descent for Learning Convex Optimization Models

22 Sept 2023 (modified: 11 Feb 2024) | Submitted to ICLR 2024
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: hybrid architectures, neurosymbolic architectures, bilevel optimization, optimization layer, discrete optimization, proximal gradient descent, optimization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose Lagrangian Proximal Gradient Descent, a flexible framework for learning convex optimization models that unifies various contemporary approaches with traditional optimization methods.
Abstract: We propose Lagrangian Proximal Gradient Descent (LPGD), a flexible framework for learning convex optimization models. Similar to traditional proximal gradient methods, LPGD can be interpreted as optimizing a smoothed envelope of the possibly non-differentiable loss. This smoothing makes it possible to train models that do not provide informative gradients, such as discrete optimization models. We show that the LPGD update can be computed efficiently by rerunning the forward solver on a perturbed input, capturing various previously proposed methods as special cases. Moreover, we prove that the LPGD update converges to the true gradient as the smoothing parameter approaches zero. Finally, we experimentally investigate the benefits of applying LPGD even in a fully differentiable setting.
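To make the abstract's central computational idea concrete (rerunning the forward solver on a perturbed input and using a finite difference of the two solutions, scaled by the smoothing parameter), here is a minimal sketch. The function name lpgd_style_update, the box-projection "solver", and all parameter names are illustrative assumptions for exposition, not the authors' implementation or their exact update rule.

```python
import numpy as np

def lpgd_style_update(solver, w, incoming_grad, tau=0.1):
    # Re-run the forward solver on an input perturbed in the direction of the
    # incoming loss gradient, then take the finite difference of the two
    # solutions as a surrogate gradient with respect to the solver input.
    # (Hypothetical sketch; tau plays the role of a smoothing parameter.)
    z = solver(w)
    z_perturbed = solver(w + tau * incoming_grad)
    return (z_perturbed - z) / tau

# Toy usage: the "solver" is projection onto the box [0, 1]^n, a simple convex
# program whose solution map is piecewise and not informatively differentiable.
solver = lambda w: np.clip(w, 0.0, 1.0)
w = np.array([0.3, -0.2, 1.5])
incoming_grad = np.array([1.0, 1.0, 1.0])  # stand-in for dL/dz from a downstream loss
print(lpgd_style_update(solver, w, incoming_grad, tau=0.1))
```

The sketch only illustrates the "perturb and re-solve" pattern described in the abstract; as the smoothing parameter tau shrinks, such a finite-difference surrogate is the quantity the paper analyzes in its convergence claim.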
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5435