Supplementary Material: zip
Keywords: local search, neural combinatorial optimization, imitation learning, reinforcement learning
TL;DR: We introduce neural models that predict optimal k-step moves in local search.
Abstract: Local search is a key tool in combinatorial optimization. Once a neighborhood operator is fixed, the method starts from a random solution and iteratively moves to a candidate neighbor. Typically, the best neighbor is sought, which requires visiting all neighbors and can be computationally expensive, since the fitness function must be recalculated for each of them. Recently, neural networks have been employed with considerable success to predict the best move in a single shot, thereby reducing computational cost. However, this short-sighted approach, like traditional local search, tends to get stuck in local optima. To address this limitation, we propose neural models capable of predicting the optimal move after $k$ local search steps, effectively learning the $k$-step trajectory that maximizes improvement in the objective function. Preliminary experiments on the Maximum Cut problem, which motivated this proposal, show that incorporating an imitation learning loss into the conventional reinforcement learning pipeline not only accelerates convergence but also achieves strong performance, with 99\% accuracy in selecting the optimal move within 3-step neighborhoods.
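To make the notion of a $k$-step supervision target concrete, here is a minimal sketch (not the authors' code; all names and the single-vertex-flip move, brute-force enumeration, and random MaxCut instance are illustrative assumptions) of how the optimal first move within a $k$-step neighborhood could be labeled for imitation learning on a small instance:

```python
# Hypothetical sketch: label the imitation target for MaxCut local search by
# enumerating all k-flip sequences and keeping the first flip of the best one.
import itertools
import numpy as np

def cut_value(W, x):
    """Total weight of edges crossing the cut defined by binary assignment x."""
    crossing = np.not_equal.outer(x, x).astype(float)
    return 0.5 * np.sum(W * crossing)  # each edge counted twice, hence the 0.5

def best_k_step_first_move(W, x, k):
    """Return the first flip of the k-flip sequence with the largest cut gain."""
    n = len(x)
    best_gain, best_first = -np.inf, None
    for seq in itertools.product(range(n), repeat=k):
        y = x.copy()
        for v in seq:
            y[v] ^= 1  # one local-search move = flip a single vertex
        gain = cut_value(W, y) - cut_value(W, x)
        if gain > best_gain:
            best_gain, best_first = gain, seq[0]
    return best_first, best_gain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    W = rng.random((n, n))
    W = np.triu(W, 1); W = W + W.T          # random symmetric weights, no self-loops
    x = rng.integers(0, 2, size=n)          # random starting solution
    move, gain = best_k_step_first_move(W, x, k=3)
    print(f"imitation target: flip vertex {move} (3-step gain {gain:.3f})")
```

In this reading, a neural model would be trained to predict `best_first` directly from the instance and current solution, avoiding the exponential enumeration at inference time.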
Submission Number: 3