Keywords: Explainable AI, Combinatorial Optimization
TL;DR: We uncover how a Graph Neural Network solving the Graph Coloring Problem learns strategies reminiscent of traditional combinatorial optimization heuristics, advancing our understanding of AI model interpretability.
Abstract: Despite advances in solving combinatorial optimization problems with Graph Neural Networks (GNNs), understanding their learning processes and exploiting the acquired knowledge remain elusive goals, particularly for imperfect models addressing NP-complete problems. This gap underscores the need for Explainable AI (XAI) methodologies. In this study, we elucidate the mechanisms of GNN-GCP, a model trained to solve the Graph Coloring Problem (GCP). Our findings reveal that the concepts underpinning the operation of GNN-GCP resemble those of hand-crafted combinatorial optimization heuristics. One prominent example is the ``support of vertex $v$ with respect to a given coloring of the graph'', i.e., the number of neighbors that $v$ has in each color class other than its own. By providing insights into the inner workings of GNN-GCP, we contribute to the larger goal of making AI models more interpretable and trustworthy, even in complex settings such as combinatorial optimization.
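To make the support concept concrete, here is a minimal Python sketch, not taken from the GNN-GCP codebase; the graph representation, function name, and example graph are our own illustrative assumptions. It counts, for a vertex $v$ under a given coloring, the neighbors of $v$ in each color class other than its own:

```python
from collections import defaultdict

def vertex_support(adj, coloring, v):
    """Support of vertex v w.r.t. a coloring: a mapping
    {color: number of v's neighbors assigned that color},
    restricted to color classes other than v's own.

    adj      -- dict mapping each vertex to an iterable of its neighbors
    coloring -- dict mapping each vertex to its assigned color
    v        -- the vertex whose support is computed
    (All names here are illustrative assumptions, not the paper's API.)
    """
    support = defaultdict(int)
    for u in adj[v]:
        if coloring[u] != coloring[v]:
            support[coloring[u]] += 1
    return dict(support)

# Example: a triangle (0, 1, 2) with a pendant vertex 3, 3-colored.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
coloring = {0: "red", 1: "green", 2: "blue", 3: "red"}
print(vertex_support(adj, coloring, 1))  # {'red': 2, 'blue': 1}
```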
Submission Number: 11