Reinforcement Learning for Node Selection in Branch-and-Bound

TMLR Paper 3171 Authors

12 Aug 2024 (modified: 28 Nov 2024) · Decision pending for TMLR · CC BY 4.0
Abstract: A key challenge in branch and bound lies in identifying the optimal node in the search tree from which to proceed. Current state-of-the-art selectors use either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data. In contrast to existing approaches that only consider isolated nodes, we propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state. To achieve this, we train a graph neural network that produces a probability distribution based on the path from the tree's root to its ``to-be-selected'' leaves. Representing node selection as a probability distribution allows us to train a decision-making policy using state-of-the-art RL techniques that capture both intrinsic node quality and node-evaluation costs. Despite being trained only on specially designed, synthetic travelling salesman problem (TSP) instances, our method induces a high-quality node-selection policy on a varied set of complex benchmark problems. Using such a \emph{fixed pretrained} policy yields significant improvements on several benchmarks, reducing optimality gaps and increasing per-node efficiency under strict time constraints.
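The core mechanism the abstract describes, scoring each open leaf from the features along its root-to-leaf path and normalizing the scores into a probability distribution over leaves, can be illustrated with a minimal sketch. This is not the authors' implementation: the GRU path encoder, the feature dimension, and names such as `PathPolicy` are illustrative assumptions standing in for the paper's graph neural network.

```python
# Minimal sketch (illustrative, not the paper's code): score each open
# leaf of a branch-and-bound tree by encoding the feature sequence along
# its root-to-leaf path, then normalize scores into a distribution that
# a policy-gradient RL method can train against.
import torch
import torch.nn as nn

class PathPolicy(nn.Module):
    def __init__(self, node_feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Hypothetical choice: a GRU stands in for the paper's GNN encoder.
        self.encoder = nn.GRU(node_feat_dim, hidden_dim, batch_first=True)
        self.score_head = nn.Linear(hidden_dim, 1)

    def forward(self, paths: list[torch.Tensor]) -> torch.Tensor:
        # paths[i]: (path_len_i, node_feat_dim) features from root to leaf i.
        scores = []
        for path in paths:
            _, h_n = self.encoder(path.unsqueeze(0))    # final hidden state
            scores.append(self.score_head(h_n[-1, 0]))  # scalar score per leaf
        logits = torch.cat(scores)                      # (num_leaves,)
        return torch.softmax(logits, dim=0)             # distribution over leaves

# Usage: sample the leaf to expand next; the log-probability feeds a
# policy-gradient update (e.g. REINFORCE or PPO).
policy = PathPolicy(node_feat_dim=8)
open_leaf_paths = [torch.randn(depth, 8) for depth in (3, 5, 4)]  # dummy tree
probs = policy(open_leaf_paths)
dist = torch.distributions.Categorical(probs)
leaf = dist.sample()
log_prob = dist.log_prob(leaf)
```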
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Xi_Lin2
Submission Number: 3171