Keywords: Combinatorial Optimization, Gradient Estimation, Preference Learning, Graph Neural Networks, Traveling Salesman Problem, TSP, Minimum k-Cut, Machine Learning
TL;DR: We solve combinatorial optimization problems by combining existing approximation algorithms with GNNs, trained in a self-supervised manner using our novel gradient estimation scheme, PBGE.
Abstract: Combinatorial optimization (CO) problems arise across a broad spectrum of domains. While exact solutions are often computationally infeasible, many practical applications require high-quality solutions within a given time budget. To address this, we propose a learning-based approach that enhances existing non-learned heuristics for CO. Specifically, we parameterize these heuristics and train graph neural networks (GNNs) to predict parameter values that yield near-optimal solutions. Our method is trained end-to-end in a self-supervised fashion, using a novel gradient estimation scheme that treats the heuristic as a black box. This approach combines the strengths of learning and traditional algorithms: the GNN learns from data to guide the algorithm toward better solutions, while the heuristic guarantees feasibility. We validate our method on two well-known CO problems: the traveling salesman problem and the minimum k-cut problem.
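To make the core idea concrete, here is a minimal sketch of training through a non-differentiable black-box heuristic. This is not the paper's PBGE scheme (whose details are not given here); it uses a generic Gaussian-smoothed score-function (REINFORCE-style) estimator, and the `heuristic_cost` function is a hypothetical stand-in, a toy quadratic in place of, say, a parameterized TSP heuristic's tour cost.

```python
import numpy as np

def heuristic_cost(theta):
    """Hypothetical black-box heuristic: maps a parameter vector to a solution
    cost. A toy quadratic stands in for a parameterized CO heuristic; only
    function evaluations are available, no gradients."""
    target = np.array([1.0, -2.0])
    return float(np.sum((theta - target) ** 2))

def score_function_gradient(theta, rng, sigma=0.1, n_samples=64):
    """Estimate the gradient of E[cost(theta + sigma * eps)] w.r.t. theta
    without differentiating the heuristic, via the score-function identity.
    A mean baseline is subtracted for variance reduction."""
    eps = rng.standard_normal((n_samples, theta.size))
    costs = np.array([heuristic_cost(theta + sigma * e) for e in eps])
    baseline = costs.mean()
    return ((costs - baseline)[:, None] * eps).mean(axis=0) / sigma

# Gradient descent on the smoothed objective using only black-box evaluations.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(300):
    theta -= 0.1 * score_function_gradient(theta, rng)
```

In the paper's setting, `theta` would be the per-instance heuristic parameters predicted by a GNN, and the estimated gradient would be backpropagated into the network; the sketch above only illustrates the black-box gradient step itself.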
Submission Number: 8