Keywords: Combinatorial Optimization, Graph Neural Networks, Unsupervised Learning
TL;DR: We show that graph neural networks can efficiently implement a message-passing algorithm that is optimal (under plausible assumptions) for a broad class of combinatorial problems, and demonstrate that this leads to empirically powerful architectures.
Abstract: In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems, using powerful algorithmic tools from semidefinite programming (SDP). Concretely, we prove that polynomial-sized message-passing algorithms can represent the most powerful polynomial-time algorithms for Max Constraint Satisfaction Problems, assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high-quality approximate solutions to landmark combinatorial optimization problems such as Max-Cut, Min-Vertex-Cover, and Max-3-SAT. Our approach achieves strong empirical results against solvers and neural baselines across a wide range of real-world and synthetic datasets. Finally, we take advantage of OptGNN's ability to capture convex relaxations to design an algorithm that produces bounds on the optimal solution from the learned embeddings.
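For intuition, below is a minimal sketch of the kind of message passing the abstract alludes to: projected gradient updates on the low-rank (vector) form of the Max-Cut SDP relaxation, followed by standard random-hyperplane rounding in the style of Goemans-Williamson. This is not the paper's OptGNN architecture; the function names and hyperparameters (maxcut_sdp_message_passing, hyperplane_round, dim, lr) are illustrative assumptions introduced here for the example.

```python
# Sketch only, assuming the Max-Cut vector relaxation
#   max  sum_{(i,j) in E} (1 - <v_i, v_j>) / 2   s.t.  ||v_i|| = 1,
# solved by neighbor-aggregation updates (a simple message-passing scheme),
# then rounded to a cut with a random hyperplane.
import numpy as np

def maxcut_sdp_message_passing(adj, dim=16, steps=200, lr=0.1, seed=0):
    """adj: symmetric 0/1 adjacency matrix of shape (n, n)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    V = rng.normal(size=(n, dim))
    V /= np.linalg.norm(V, axis=1, keepdims=True)      # start on the unit sphere
    for _ in range(steps):
        # Message for node i is the sum of its neighbors' vectors; the gradient
        # of the cut objective pushes v_i away from that sum.
        msgs = adj @ V
        V = V - lr * msgs
        V /= np.linalg.norm(V, axis=1, keepdims=True)   # re-project to the sphere
    return V

def hyperplane_round(V, seed=0):
    """Round unit-norm embeddings to a two-sided cut via a random hyperplane."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=V.shape[1])
    return (V @ r >= 0).astype(int)

if __name__ == "__main__":
    # Toy example: a 5-cycle, whose optimal Max-Cut value is 4.
    n = 5
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
    x = hyperplane_round(maxcut_sdp_message_passing(adj))
    cut = sum(adj[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])
    print("cut value:", cut)
```

The point of the sketch is only that each update is a local aggregation over graph neighbors, which is exactly the operation a graph neural network layer can learn; OptGNN's learned layers and its embedding-based bounds are described in the paper itself.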
Primary Area: Graph neural networks
Submission Number: 13057