Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting
Abstract: In many applications, a combinatorial problem must be repeatedly solved with similar, but distinct, parameters. Yet, the parameters $w$ are not directly observed; only contextual data $d$ that correlates with $w$ is available. It is tempting to use a neural network to predict $w$ given $d$. However, training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. We study the case where the problem in question is an Integer Linear Program (ILP). We propose applying a three-operator splitting technique, also known as Davis-Yin splitting (DYS), to the quadratically regularized continuous relaxation of the ILP. We prove that the resulting scheme is compatible with the recently introduced Jacobian-free backpropagation (JFB). Our experiments on two representative ILPs---the shortest path problem and the knapsack problem---demonstrate that this combination (DYS on the forward pass, JFB on the backward pass) yields a scheme that scales more effectively to high-dimensional problems than existing approaches.
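To make the forward pass concrete, below is a minimal NumPy sketch of a generic DYS iteration applied to a quadratically regularized LP relaxation $\min_x \; w^\top x + \tfrac{\gamma}{2}\|x\|^2$ subject to $Ax = b$, $0 \le x \le 1$. The problem data, step size `alpha`, and iteration count are illustrative assumptions; the implementation in the linked repository may differ in details.

```python
# Minimal sketch (not the paper's implementation) of Davis-Yin splitting (DYS)
# for the quadratically regularized LP relaxation
#     min_x  w^T x + (gamma/2)||x||^2   s.t.  Ax = b,  0 <= x <= 1.
# Split as f = indicator of the box [0,1]^n, g = indicator of {x : Ax = b},
# and h(x) = w^T x + (gamma/2)||x||^2 (smooth).
import numpy as np

def dys_solve(A, b, w, gamma=0.1, alpha=0.5, iters=500):
    m, n = A.shape
    # Pseudo-inverse used to project onto the affine set {x : Ax = b}.
    A_pinv = np.linalg.pinv(A)

    def proj_box(x):            # prox of the indicator of [0, 1]^n
        return np.clip(x, 0.0, 1.0)

    def proj_affine(x):         # prox of the indicator of {x : Ax = b}
        return x - A_pinv @ (A @ x - b)

    def grad_h(x):              # gradient of the smooth term
        return w + gamma * x

    z = np.zeros(n)
    for _ in range(iters):
        x = proj_box(z)
        y = proj_affine(2 * x - z - alpha * grad_h(x))
        z = z + (y - x)         # DYS fixed-point update
    return proj_box(z)

# Tiny illustrative example: two edges, one flow constraint x1 + x2 = 1;
# the cheaper edge should carry all the flow.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
w = np.array([3.0, 1.0])
print(dys_solve(A, b, w))       # approximately [0, 1]
```

On the backward pass, JFB (as described in the abstract) would differentiate through only the final fixed-point update rather than unrolling all iterations or solving an implicit-function linear system; the sketch above covers the forward pass only.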
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/mines-opt-ml/fpo-dys
Supplementary Material: zip
Assigned Action Editor: ~Jasper_C.H._Lee1
Submission Number: 2536