Keywords: combinatorial optimization, graph neural networks, unsupervised learning, simulated annealing
Abstract: The hardness of combinatorial optimization (CO) problems hinders collecting solutions for supervised learning. Moreover, learning neural networks for CO problems without labeled data is notoriously difficult, as training is easily trapped at local optima. In this work, we propose a simple but effective annealed training framework for CO problems. In particular, we transform CO problems into the smoothest unbiased energy-based models (EBMs) by adding carefully selected penalties, then train graph neural networks to approximate the EBMs. We prevent the training from getting stuck at local optima near the initialization by introducing an annealed loss function.
Experimental evaluation demonstrates that our annealed training framework yields substantial improvements: on four types of CO problems, our method outperforms other unsupervised neural methods on both synthetic and real-world graphs.
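To illustrate the annealing idea described in the abstract, the sketch below minimizes a tempered loss L_T(p) = E[energy(p)] - T * H(p) on a toy MaxCut instance, lowering the temperature T so that a high-entropy (smooth) objective gradually sharpens into the hard CO objective. This is a minimal, hypothetical stand-in: the paper trains graph neural networks on EBMs, whereas here plain gradient descent on per-node logits (names `annealed_maxcut`, the schedule `temps`, and all hyperparameters are illustrative assumptions, not the authors' method).

```python
import math
import random

def annealed_maxcut(edges, n, temps=(1.0, 0.5, 0.2, 0.1, 0.05, 0.01),
                    steps=200, lr=0.5, seed=0):
    """Toy annealed training for MaxCut (illustrative only).

    Minimizes L_T(p) = -(expected cut) - T * H(p) over node
    probabilities p = sigmoid(theta), decreasing T each stage.
    """
    rng = random.Random(seed)
    theta = [rng.uniform(-0.1, 0.1) for _ in range(n)]  # node logits
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    for T in temps:  # annealing schedule: high T = smooth, low T = hard
        for _ in range(steps):
            p = [1.0 / (1.0 + math.exp(-t)) for t in theta]
            for i in range(n):
                # d(-expected cut)/dp_i = -sum_j (1 - 2 p_j);
                # the entropy term contributes T * theta_i to dL/dp_i
                grad = -sum(1.0 - 2.0 * p[j] for j in nbrs[i]) + T * theta[i]
                theta[i] -= lr * p[i] * (1.0 - p[i]) * grad  # chain rule via sigmoid
    x = [1 if t > 0 else 0 for t in theta]  # round to a hard assignment
    cut = sum(1 for u, v in edges if x[u] != x[v])
    return x, cut
```

At high T the entropy term keeps the assignment near uniform, avoiding the local optima near initialization that the abstract warns about; as T decays, the coupling term dominates and the relaxation commits to a discrete cut.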
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Probabilistic Methods (eg, variational inference, causal inference, Gaussian processes)
TL;DR: A simple but effective annealed training framework for unsupervised learning of combinatorial optimization problems over graphs
Supplementary Material: zip