Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Lagrangian Relaxation, Mixed Integer Linear Programming, Combinatorial Optimization, Graph Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Lagrangian relaxation stands among the most efficient approaches for solving
Mixed Integer Linear Programs (MILPs) with difficult constraints. Given any duals
for these constraints, called Lagrangian Multipliers (LMs), it returns a bound on
the optimal value of the MILP, and Lagrangian methods seek the LMs giving the
best such bound. But these methods generally rely on iterative algorithms resembling
gradient descent to maximize the concave, piecewise-linear dual function:
the computational burden grows quickly with the number of relaxed constraints.
We introduce a deep learning approach that bypasses the descent, effectively
amortizing the local, per-instance optimization. A probabilistic encoder based
on a graph convolutional network computes high-dimensional representations of
relaxed constraints in MILP instances. A decoder then turns these representations
into LMs. We train the encoder and decoder jointly by directly optimizing the
bound obtained from the predicted multipliers. Numerical experiments show that
our approach closes up to 85% of the gap between the continuous relaxation and
the best Lagrangian bound, and provides a high-quality warm start for
descent-based Lagrangian methods.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5480