Abstract: Recently, there has been much work on designing general heuristics for graph-based, combinatorial optimization problems via the incorporation of Graph Neural Networks (GNNs) to learn distribution-specific solution structures. However, there is a lack of consistency in evaluating these heuristics in terms of the baselines and instances chosen, making it difficult to assess the relative performance of the algorithms. In this paper, we introduce \textbf{MaxCutBench}—an open-source benchmark suite dedicated to the NP-hard Maximum Cut problem. The suite offers a unified interface for $16$ algorithms, both traditional and machine-learning-based. Using our benchmark, we conduct an in-depth analysis of the implemented algorithms on a carefully selected set of hard instances from diverse graph datasets. Our main finding is that classical local search heuristics can outperform several highly cited learning-based approaches, including S2V-DQN (Khalil et al., 2017) and ECO-DQN (Barrett et al., 2020), in terms of objective value, generalization, inference time, and scalability. Additionally, we find that the performance of ECO-DQN either remains the same or improves when the GNN is replaced by simple linear regression. We hope our benchmark will contribute to the efforts of the community to standardize the evaluation of learned heuristics for combinatorial optimization. Code, data, and pre-trained models are available at: \url{https://github.com/ankurnath/MaxCut-Bench}.
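To make the abstract's central comparison concrete, the following is a minimal, hypothetical sketch of the kind of classical local search heuristic the paper benchmarks against learning-based methods: a greedy 1-flip local search for Max-Cut that repeatedly moves any vertex whose side-switch increases the cut value. This is an illustrative assumption, not the benchmark's actual implementation; the function name, signature, and weighted-edge-list input format are invented for this example.

```python
import random

def maxcut_local_search(n, edges, seed=0):
    """Greedy 1-flip local search for Max-Cut (illustrative sketch only,
    not MaxCutBench's implementation).

    n     -- number of vertices, labeled 0..n-1
    edges -- list of (u, v, weight) tuples
    Returns (side assignment, cut value) at a 1-flip local optimum.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]  # random initial bipartition

    # Build an adjacency list for fast gain computation.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def gain(v):
        # Change in cut value if vertex v switches sides:
        # edges to same-side neighbors become cut (+w),
        # edges to opposite-side neighbors become uncut (-w).
        return sum(w if side[u] == side[v] else -w for u, w in adj[v])

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1  # flip vertex v to the other side
                improved = True

    cut = sum(w for u, v, w in edges if side[u] != side[v])
    return side, cut

# Toy instance: a triangle with unit weights. Any bipartition that
# separates at least one vertex cuts exactly 2 of the 3 edges.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
side, cut = maxcut_local_search(3, edges)
print(cut)  # → 2
```

Such heuristics are attractive baselines precisely because they need no training, run in time roughly linear in the number of edges per sweep, and transfer unchanged across graph distributions.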
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Xi_Lin2
Submission Number: 4141