Solve Smart, Not Often: Policy Learning for Costly MILP Re-solving

ICLR 2026 Conference Submission 12781 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Real-time operation, Policy learning, Mixed-Integer Linear Programming
Abstract: A common challenge in real-time operations is deciding whether to re-solve an optimization problem or to continue using an existing solution. Modern data platforms can collect information at high frequency, yet many real-time operations depend on repeatedly solving computationally intensive optimization problems formulated as Mixed-Integer Linear Programs (MILPs). Determining when to re-solve is, therefore, an economically important question. This problem poses several challenges: 1) how to characterize solution optimality and solving cost; 2) how to detect environmental changes and select beneficial samples for solving the MILP; 3) given the long time horizon and non-MDP structure, vanilla reinforcement learning (RL) methods are not directly applicable and tend to suffer from value-function explosion. Existing literature largely focuses on heuristics, low-data settings, and smooth objectives, with little attention to the NP-hard MILPs that are common in practice. We propose a framework called Proximal $\underline{\text{P}}$olicy $\underline{\text{O}}$ptimization with $\underline{\text{C}}$hange Point Detection (POC), which systematically balances solution quality and solving cost when choosing re-solving times. Theoretically, we establish the relationship between the number of re-solves and the re-solving cost. To test our framework, we assemble eight synthetic and real-world datasets and show that POC consistently outperforms existing baselines by 2\%-17\%. As a side benefit, our work fills a gap in the literature by introducing real-time MILP benchmarks and evaluation criteria.
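To make the re-solve decision concrete, below is a minimal, hypothetical sketch of the kind of gating loop the abstract describes: a change-point detector flags drift in the problem data, and a learned policy decides whether the expected benefit of re-solving outweighs the solving cost. None of the names here (`solve_milp`, `CusumDetector`, `ResolvePolicy`) come from the submission; they are stand-ins for the paper's MILP solver, detection module, and PPO-trained policy, respectively.

```python
# Illustrative re-solve gating loop (not the authors' code): detect a change
# point in the drifting problem data, then let a policy trade off estimated
# regret against the cost of re-solving the MILP.
import numpy as np


def solve_milp(params: np.ndarray) -> np.ndarray:
    """Placeholder for an expensive MILP solve (e.g., via SCIP or Gurobi)."""
    # Stand-in: pretend the optimal integer decision simply tracks the parameters.
    return np.round(params)


class CusumDetector:
    """One-sided CUSUM statistic on a summary of how far the data has drifted."""

    def __init__(self, drift: float = 0.05, threshold: float = 1.0):
        self.drift, self.threshold, self.stat = drift, threshold, 0.0

    def update(self, deviation: float) -> bool:
        self.stat = max(0.0, self.stat + deviation - self.drift)
        if self.stat > self.threshold:
            self.stat = 0.0
            return True  # change point flagged
        return False


class ResolvePolicy:
    """Stand-in for a learned (e.g., PPO) policy mapping features to a re-solve decision."""

    def decide(self, staleness: int, est_regret: float, solve_cost: float) -> bool:
        # Illustrative rule: re-solve when estimated regret outweighs the solving cost.
        return est_regret * (1.0 + staleness) > solve_cost


rng = np.random.default_rng(0)
params = rng.normal(size=5)           # problem data at time 0
incumbent = solve_milp(params)        # current solution in use
detector, policy = CusumDetector(), ResolvePolicy()
steps_since_solve, solve_cost = 0, 0.5

for t in range(200):
    params = params + rng.normal(scale=0.02, size=5)  # slowly drifting problem data
    deviation = float(np.linalg.norm(params - incumbent) / len(params))
    steps_since_solve += 1
    if detector.update(deviation) and policy.decide(steps_since_solve, deviation, solve_cost):
        incumbent = solve_milp(params)                 # pay the cost, refresh the solution
        steps_since_solve = 0
```

The point of the sketch is the division of labor: the detector filters the high-frequency data stream down to candidate decision points, and only then does the policy weigh solution staleness against solving cost, which is the performance-cost balance the framework targets.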
Supplementary Material: zip
Primary Area: optimization
Submission Number: 12781