[RE] GNNBoundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries

Published: 07 May 2025 · Last Modified: 07 May 2025 · Accepted by TMLR · License: CC BY 4.0
Abstract: Graph Neural Networks (GNNs) can model complex relationships but pose significant interpretability challenges: the unique and varying properties of graph structures hinder the adaptation of explanation methods from other domains. GNNBoundary was proposed as a model-level explainability tool that offers insight into a GNN's overall behavior by generating graphs near its decision boundaries. This paper evaluates the reproducibility, robustness, and practical applicability of the findings in the original work by replicating and extending its experiments, highlighting both strengths and limitations and considering potential improvements. Our results show that while the algorithm can reliably generate near-boundary graphs in certain settings, its performance is highly sensitive to hyperparameter choices and suffers from convergence issues. Furthermore, we find that the generated solutions lack diversity, often covering only a single region of the decision boundary, which limits their usefulness for broader decision-boundary analysis. All code used throughout this research is publicly available on GitHub.
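The abstract's central object is a "near-boundary graph": a graph that the classifier places almost exactly between two classes. As a rough, hypothetical illustration of that criterion (not the paper's exact objective; the function and argument names below are ours), one plausible boundary loss drives the predicted probabilities of two target classes to be high and nearly equal:

import torch
import torch.nn.functional as F

def boundary_loss(logits: torch.Tensor, class_a: int, class_b: int) -> torch.Tensor:
    # Softmax over the class logits the GNN assigns to a generated graph.
    probs = F.softmax(logits, dim=-1)
    p_a, p_b = probs[class_a], probs[class_b]
    # Penalize any gap between the two target probabilities
    # (pulls the graph onto the boundary between class_a and class_b) ...
    gap = (p_a - p_b).abs()
    # ... and reward total probability mass on the target pair
    # (pushes the graph away from all other classes).
    return gap - (p_a + p_b)

Minimizing a loss of this form over a learned graph distribution would yield graphs with p_a ≈ p_b ≈ 0.5, i.e. graphs that sit on the decision boundary between the two classes, which is the kind of output whose reliability and diversity this reproduction evaluates.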
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/leonardhorns/GNNBoundary
Assigned Action Editor: ~Simone_Scardapane1
Submission Number: 4297