GLIM: Towards Generalizable Learning Representation for MILP

ICLR 2026 Conference Submission 12992 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Mixed-Integer Linear Programming, Embedding Model, Representation Learning
Abstract: Mixed-Integer Linear Programs (MILPs) underpin a wide range of combinatorial optimization applications, and machine learning has increasingly been applied to MILP solving. However, existing learning-based approaches often struggle to generalize beyond narrow training distributions or specific tasks. In this paper, we introduce GLIM (Generalizable Learning Representation for MILPs), a general-purpose embedding model designed to unify learning across diverse MILP classes and downstream tasks. GLIM is trained on a large corpus of roughly 78,000 instances spanning 2,000 problem classes. Motivated by the observation that problem type and problem scale are orthogonal factors whose interaction drives empirical difficulty, GLIM learns a joint representation that disentangles type, scale, and solving complexity. Each instance is encoded as a bipartite graph and processed by a hybrid architecture that couples GNN modules with Perceiver-like blocks. We evaluate GLIM on two representative MILP tasks to probe representation quality: (i) MILP Instance Retrieval and (ii) MILP Solver Hyperparameter Prediction. Across in-distribution and distribution-shifted settings, including real-world MIPLIB benchmarks, GLIM outperforms strong baselines in most cases and exhibits robust transfer to new classes and sizes. These results indicate that a single, disentangled embedding can serve as a reusable backbone for MILP tasks, enabling broader generalization than task- or class-specific learned components.
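The abstract mentions encoding each MILP instance as a bipartite graph before it is passed to the GNN/Perceiver backbone. Below is a minimal, illustrative sketch of the standard variable–constraint bipartite encoding for a MILP of the form min c^T x s.t. Ax <= b with some variables integer; the specific node features chosen here (objective coefficient, integrality flag, right-hand side, row density) are assumptions for illustration, not GLIM's actual feature set.

```python
# Minimal sketch of a variable-constraint bipartite encoding for a MILP instance.
# Feature choices below are illustrative assumptions, not the paper's feature set.
import numpy as np
import scipy.sparse as sp

def milp_to_bipartite(A: sp.csr_matrix, b: np.ndarray, c: np.ndarray, is_int: np.ndarray):
    """Return variable/constraint node features and the nonzero-coefficient edges."""
    n_cons, n_vars = A.shape
    # Variable nodes: objective coefficient and integrality flag (assumed features).
    var_feats = np.stack([c, is_int.astype(float)], axis=1)        # (n_vars, 2)
    # Constraint nodes: right-hand side and row density (assumed features).
    row_nnz = np.diff(A.indptr)
    cons_feats = np.stack([b, row_nnz / max(n_vars, 1)], axis=1)   # (n_cons, 2)
    # Edges: one per nonzero coefficient A_ij, linking constraint i to variable j.
    coo = A.tocoo()
    edge_index = np.stack([coo.row, coo.col])                      # (2, nnz)
    edge_attr = coo.data[:, None]                                  # (nnz, 1)
    return var_feats, cons_feats, edge_index, edge_attr

# Tiny example: min -x0 - x1  s.t.  x0 + 2*x1 <= 3,  x0, x1 integer.
A = sp.csr_matrix(np.array([[1.0, 2.0]]))
v_feats, c_feats, edges, coeffs = milp_to_bipartite(
    A, b=np.array([3.0]), c=np.array([-1.0, -1.0]), is_int=np.array([True, True])
)
print(v_feats.shape, c_feats.shape, edges.shape, coeffs.shape)
```

A graph embedding model such as the one described would then pool over these variable and constraint nodes to produce a single instance-level vector used for retrieval or hyperparameter prediction.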
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 12992