Keywords: Probabilistic Inference, Diffusion Models, Combinatorial Optimization, Vehicle Routing Problem
Abstract: Combinatorial Optimization problems are widespread in domains such as logistics, manufacturing, and drug discovery, yet their NP-hard nature makes them computationally challenging. Recent Neural Combinatorial Optimization (NCO) methods leverage deep learning to learn policies for constructing solutions, trained via Supervised or Reinforcement Learning. While promising, these approaches often rely on task-specific augmentations, perform poorly on out-of-distribution instances, and lack robust inference mechanisms. Moreover, existing latent space models either require labeled data or use an instance-independent latent distribution.
In this work, we introduce LGS-Net, an instance-conditioned latent space model that enables sampling-based inference beyond standard decoding methods. Leveraging this representation, we propose two diffusion-driven inference schemes: a diffusion-prior Markov Chain Monte Carlo sampler and a diffusion-guided Sequential Monte Carlo sampler, both coupled with Stochastic Approximation for test-time adaptation toward low-cost solutions. Empirical results on benchmark routing tasks show that our method achieves state-of-the-art performance among NCO baselines.
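To make the inference idea concrete, the sketch below illustrates sampling-based test-time search in a latent space with a Metropolis-Hastings chain and temperature annealing. It is a minimal toy, not the paper's method: the `cost` and `log_prior` functions are hypothetical stand-ins for decoding a latent with LGS-Net and for its instance-conditioned (diffusion) prior, and the quadratic objective exists only so the example runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(z):
    # Hypothetical surrogate: stands in for decoding a latent z into a
    # route and evaluating its tour cost. Here: a simple quadratic bowl.
    return float(np.sum(z ** 2))

def log_prior(z):
    # Stand-in for an instance-conditioned latent prior; a standard
    # Gaussian is used purely for illustration.
    return -0.5 * float(np.sum(z ** 2))

def latent_mcmc_search(dim=8, n_steps=2000, step=0.3, temp0=1.0, anneal=0.999):
    """Metropolis-Hastings in latent space, targeting
    p(z) proportional to prior(z) * exp(-cost(z) / T),
    with the temperature T annealed over iterations so the chain
    concentrates on low-cost solutions (a crude analogue of
    stochastic-approximation test-time adaptation)."""
    z = rng.normal(size=dim)
    temp = temp0
    best_z, best_c = z.copy(), cost(z)
    for _ in range(n_steps):
        prop = z + step * rng.normal(size=dim)
        log_acc = (log_prior(prop) - log_prior(z)
                   + (cost(z) - cost(prop)) / temp)
        if np.log(rng.uniform()) < log_acc:
            z = prop
        c = cost(z)
        if c < best_c:
            best_z, best_c = z.copy(), c
        temp *= anneal  # gradual cooling toward low-cost regions
    return best_z, best_c

z_star, c_star = latent_mcmc_search()
```

The same loop structure extends to a Sequential Monte Carlo variant by maintaining a population of latents with resampling weights instead of a single chain.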
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 43