Keywords: combinatorial optimization, unsupervised learning, graph neural networks
TL;DR: X^2GNN iteratively refines (exploit) a solution pool (explore) using a GNN for combinatorial optimization, generalizing across problem distributions and sizes.
Abstract: Combinatorial optimization (CO) problems are pervasive across domains, but their NP-hard nature often necessitates problem-specific heuristic algorithms. Recent advances in deep learning have produced learning-based heuristics, yet these approaches often suffer from limited search capability. We introduce Explore-and-Exploit GNN ($X^2$GNN, pronounced x-squared GNN), a novel unsupervised neural framework that combines exploration and exploitation for combinatorial search: (i) Exploration: $X^2$GNN generates multiple solutions simultaneously, promoting diversity in the search space; (ii) Exploitation: $X^2$GNN employs neural stochastic iterative refinement, building on existing partial solutions to guide the search toward promising regions and to escape local optima. By balancing exploration and exploitation, $X^2$GNN achieves superior performance and generalization on several graph CO problems, including Max Cut, Max Independent Set, and Max Clique. Notably, on large Max Clique instances, $X^2$GNN consistently generates solutions within 1.2% of optimality, while other state-of-the-art learning-based approaches fail to come within 22% of optimal. Moreover, under reasonable time budgets, $X^2$GNN consistently finds better solutions than Gurobi on large graphs for all three problems. $X^2$GNN also exhibits exceptional generalization: on Max Independent Set, it outperforms state-of-the-art methods even when trained on graphs that are smaller than, or distributed differently from, those in the test set. Our framework offers a more effective and flexible approach to neural combinatorial optimization, addressing a key challenge in the field and providing a promising direction for future research on learning-based heuristics for combinatorial optimization.
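As a reading aid, here is a minimal, hypothetical Python sketch of the explore-and-exploit loop the abstract describes, instantiated for Max Independent Set. All names and parameters (score_vertices, complete, x2_search, pool_size, destroy_frac) are illustrative assumptions, and a uniform-random scorer stands in for the trained GNN; this shows only the search skeleton, not the paper's model.

import numpy as np

def score_vertices(adj, partial, rng):
    # Stand-in for the learned model: X^2GNN would condition on the graph and
    # the current partial solution; uniform-random scores keep this runnable.
    return rng.random(adj.shape[0])

def complete(adj, partial, rng):
    # Extend a partial independent set greedily in descending score order,
    # adding a vertex only if none of its neighbors is already selected.
    sol = partial.copy()
    for v in np.argsort(-score_vertices(adj, sol, rng)):
        if not sol[v] and not adj[v][sol.astype(bool)].any():
            sol[v] = 1
    return sol

def x2_search(adj, pool_size=16, steps=50, destroy_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Exploration: build a pool of diverse solutions from empty partial sets.
    pool = [complete(adj, np.zeros(n, dtype=int), rng) for _ in range(pool_size)]
    for _ in range(steps):
        # Exploitation: stochastically destroy part of each solution, then
        # re-complete it; accept the candidate if it is at least as good.
        for i, sol in enumerate(pool):
            partial = sol.copy()
            partial[rng.random(n) < destroy_frac] = 0
            cand = complete(adj, partial, rng)
            if cand.sum() >= sol.sum():
                pool[i] = cand
    return max(pool, key=lambda s: int(s.sum()))

# Usage: a 5-cycle, whose maximum independent set has size 2.
adj = np.zeros((5, 5), dtype=int)
for u in range(5):
    adj[u, (u + 1) % 5] = adj[(u + 1) % 5, u] = 1
print(int(x2_search(adj).sum()))  # prints 2

The accept-if-no-worse rule in the inner loop lets pool members drift across plateaus, one plausible reading of how stochastic refinement helps escape local optima.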
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12302