Rethinking Local Branching: A Reinforcement Learning Approach to Neighborhood Control

ICLR 2026 Conference Submission 18820 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Mixed-Integer Linear Programming; Local Branching; Reinforcement Learning; Neighborhood Search
Abstract: For Mixed-Integer Linear Programming (MILP), the Local Branching (LB) heuristic is a well-established local search technique. However, its performance is highly sensitive to the neighborhood size, a parameter known to be instance-dependent. While recent learning-based methods aim to predict this numerical parameter, they often require extensive offline training data. This work introduces an approach that reframes neighborhood control in LB: instead of predicting a size parameter, we learn a policy that selects the subset of variables to which the LB constraint is applied. The framework operates in two stages. First, we model the MILP instance as a graph and apply community detection to partition variables into structurally meaningful clusters, which serve as candidate neighborhoods. Second, a reinforcement learning (RL) agent dynamically selects how many clusters to explore in each iteration. Variables within the chosen clusters are subjected to the LB constraint, while all others are temporarily fixed. The result is an adaptive LB scheme in which neighborhoods are defined by structural properties and dynamically scoped via RL rather than by a single numerical parameter. Computational experiments demonstrate that the method automates neighborhood design without prior data collection. Evaluations across diverse MILP problems show that the proposed framework consistently outperforms state-of-the-art learning-based LB models and the open-source solver SCIP.
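The abstract describes a two-stage procedure: community detection over a graph built from the MILP instance, followed by an RL-controlled choice of how many clusters to expose to the LB constraint while the remaining variables are fixed. As a rough illustration only, the Python sketch below mocks that pipeline; the graph construction, the networkx community routine, the epsilon-greedy stub standing in for the RL agent, and all function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of structure-based neighborhood
# scoping for Local Branching: cluster variables, let a policy pick how
# many clusters to open, apply the LB constraint only to those variables.
import random
import networkx as nx
from networkx.algorithms import community


def build_variable_graph(constraint_rows, num_vars):
    """Connect variables that co-occur in a constraint (a simple projection
    of the bipartite variable-constraint graph; other graph models are possible)."""
    G = nx.Graph()
    G.add_nodes_from(range(num_vars))
    for row in constraint_rows:  # row: iterable of variable indices
        row = list(row)
        for i in range(len(row)):
            for j in range(i + 1, len(row)):
                G.add_edge(row[i], row[j])
    return G


def detect_clusters(G):
    """Partition variables into candidate neighborhoods via modularity-based
    community detection (one of several possible choices)."""
    return [set(c) for c in community.greedy_modularity_communities(G)]


def choose_num_clusters(q_values, epsilon=0.1):
    """Epsilon-greedy placeholder for the RL agent that decides how many
    clusters to explore this iteration; the real state/reward design is
    not specified in the abstract."""
    if random.random() < epsilon:
        return random.randrange(1, len(q_values) + 1)
    return max(range(len(q_values)), key=q_values.__getitem__) + 1


def local_branching_scope(clusters, k, incumbent):
    """Return (free_vars, fixed_vars): the LB constraint is imposed only on
    binaries in the k selected clusters; all other binaries are fixed at
    their incumbent values."""
    selected = set().union(*clusters[:k]) if k > 0 else set()
    fixed = {v: incumbent[v] for v in incumbent if v not in selected}
    return selected, fixed


# Toy usage: 6 binary variables, 3 constraints, all-zero incumbent.
rows = [(0, 1, 2), (2, 3), (3, 4, 5)]
G = build_variable_graph(rows, num_vars=6)
clusters = detect_clusters(G)
k = choose_num_clusters(q_values=[0.2] * len(clusters))
free_vars, fixed_vars = local_branching_scope(clusters, k, incumbent={v: 0 for v in range(6)})
print(free_vars, fixed_vars)
```

In this sketch the LB constraint itself (bounding the Hamming distance to the incumbent over the free variables) would be added to the sub-MILP by the solver of choice; the point is only that the neighborhood is scoped by which clusters are opened rather than by a single size parameter.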
Supplementary Material: zip
Primary Area: optimization
Submission Number: 18820