Neural Solver Selection for Combinatorial Optimization

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY-NC 4.0
TL;DR: We demonstrate that neural combinatorial optimization solvers often exhibit complementary strengths and propose a general selection framework to coordinate multiple solvers for improved performance.
Abstract:

Machine learning has increasingly been employed to solve NP-hard combinatorial optimization problems, giving rise to neural solvers that achieve remarkable performance even with minimal domain-specific knowledge. To date, the community has produced numerous open-source neural solvers with distinct motivations and inductive biases. While considerable effort has been devoted to designing powerful individual solvers, our findings reveal that existing solvers typically exhibit complementary performance across different problem instances. This suggests that substantial improvements could be achieved by coordinating neural solvers at the instance level. In this work, we propose the first general framework for coordinating neural solvers, comprising feature extraction, a selection model, and a selection strategy, with the aim of allocating each instance to the most suitable solver. To instantiate the framework, we collect several representative neural solvers with state-of-the-art performance as candidates and explore various methods for each component. We evaluate our framework on two classic problems, the Traveling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP). Experimental results show that our framework effectively distributes instances, and the resulting composite solver achieves significantly better performance than the best individual neural solver (e.g., reducing the optimality gap by 0.88% on TSPLIB and 0.71% on CVRPLIB) with little extra time cost.
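The abstract outlines a three-component pipeline: feature extraction, a selection model, and a selection strategy. As a rough illustration only (a minimal Python sketch with hypothetical solver wrappers and a generic scikit-learn-style regressor; none of these names or design details come from the paper), per-instance solver selection of this kind could be wired together as follows:

import numpy as np

# Hypothetical solver interfaces; in practice these would wrap pretrained
# neural solvers and return a tour/route for a given instance.
def solver_a(instance): ...
def solver_b(instance): ...

def extract_features(instance):
    # Illustrative hand-crafted features for an instance given as an (n, 2)
    # array of node coordinates: size, coordinate spread, mean pairwise distance.
    coords = np.asarray(instance)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.array([len(coords), coords.std(), dists.mean()])

class SolverSelector:
    # Greedy selection strategy: route each instance to the solver whose
    # predicted optimality gap is smallest (names and model are assumptions).
    def __init__(self, solvers, model):
        self.solvers = solvers      # list of callable solvers
        self.model = model          # regressor predicting a gap per solver

    def solve(self, instance):
        feats = extract_features(instance).reshape(1, -1)
        predicted_gaps = self.model.predict(feats)[0]   # shape: (num_solvers,)
        best = int(np.argmin(predicted_gaps))
        return self.solvers[best](instance)

Such a selector adds only one feature-extraction and one model-prediction step per instance, which is consistent with the paper's claim of little extra time cost relative to running the solvers themselves.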

Lay Summary:

Machine learning is increasingly used to tackle complex combinatorial optimization problems, with neural solvers showing impressive results even without deep domain expertise. In this paper, we first reveal that different neural solvers have complementary strengths, and we propose a framework that coordinates multiple neural solvers by selecting the best solver for each problem instance. In tests on two optimization problems, our approach significantly outperforms individual solvers while adding minimal extra runtime, paving the way for more efficient solutions to real-world challenges such as logistics, transportation, and resource allocation.

Primary Area: Optimization->Discrete and Combinatorial Optimization
Keywords: neural combinatorial optimization
Submission Number: 9356