T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization

Published: 21 Sept 2023 · Last Modified: 02 Jan 2024 · NeurIPS 2023 poster
Keywords: Machine Learning, Combinatorial Optimization, Generative Modeling, Diffusion Model
TL;DR: This paper proposes a training to testing framework for solving combinatorial optimization problems, which learns solution distribution during training and then conducts a gradient-based search within the solution space during testing.
Abstract: Extensive experiments have gradually revealed a performance bottleneck in framing Combinatorial Optimization (CO) solving as a neural solution prediction task. Neural networks, in their pursuit of minimizing the average objective score over the distribution of historical problem instances, diverge from CO's core goal of finding the optimal solution for each individual test instance. This calls for an effective search on each problem instance, with the model serving to provide supporting knowledge that benefits the search. To this end, we propose the T2T (Training to Testing) framework, which first leverages generative modeling to estimate the high-quality solution distribution for each instance during training, and then conducts a gradient-based search within the solution space during testing. The proposed neural search paradigm consistently leverages generative modeling, specifically diffusion, for gradual solution improvement: it disrupts the local structure of the given solution by introducing noise and reconstructs a lower-cost solution guided by the optimization objective. Experimental results on the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS) show the significant superiority of T2T, demonstrating an average performance gain of 49.15% on TSP and 17.27% on MIS compared to the previous state-of-the-art.
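The noise-then-reconstruct search loop the abstract describes can be sketched in simplified form. This is a hypothetical illustration, not the paper's method: the diffusion-based reconstruction is replaced here by a plain objective-guided 2-opt descent, and the instance, the `noise` perturbation, and all function names are stand-ins invented for this sketch.

```python
import random

def tour_cost(tour, dist):
    """Total length of a closed TSP tour under distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def noise(tour, rng, k=2):
    """Disrupt local structure: shuffle a random segment of the tour.
    Stands in for the diffusion forward (noising) step."""
    t = tour[:]
    i = rng.randrange(len(t) - k)
    seg = t[i:i + k + 1]
    rng.shuffle(seg)
    t[i:i + k + 1] = seg
    return t

def reconstruct(tour, dist):
    """Objective-guided reconstruction stand-in: greedy 2-opt descent.
    (The actual T2T framework uses a learned diffusion model here.)"""
    t = tour[:]
    n = len(t)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b, c, d = t[i], t[i + 1], t[j], t[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    t[i + 1:j + 1] = reversed(t[i + 1:j + 1])
                    improved = True
    return t

def t2t_style_search(tour, dist, steps=20, seed=0):
    """Iteratively noise and reconstruct, keeping improvements only."""
    rng = random.Random(seed)
    best = reconstruct(tour, dist)
    for _ in range(steps):
        cand = reconstruct(noise(best, rng), dist)
        if tour_cost(cand, dist) < tour_cost(best, dist):
            best = cand
    return best
```

The key design point mirrored here is that search never restarts from scratch: each step perturbs the current best solution just enough to escape local structure, then lets objective-guided reconstruction pull it back down, so cost is monotonically non-increasing across iterations.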
Submission Number: 4127