Keywords: ReScience C, ReScience X, Explainable AI, Graph Neural Networks, SubgraphX, Python
TL;DR: Reproduction of the SubgraphX algorithm proposed in the paper 'On Explainability of Graph Neural Networks via Subgraph Exploration'.
Abstract: Yuan et al. claim that their proposed method SubgraphX achieves (i) higher fidelity in explaining models for graph and node classification tasks than other explanation techniques, in particular GNNExplainer, and (ii) a 'reasonable' computational effort, which the original authors do not specify further. We define this as at most ten times slower than GNNExplainer.
We reimplemented the proposed algorithm in PyTorch and replicated the authors' experiments at a smaller scale due to resource constraints. Additionally, we evaluated the performance on a new dataset and investigated the influence of hyperparameters. Lastly, we improved SubgraphX by adding greedy initialization and by using fidelity as the score function.
We were able to reproduce the main claims on the MUTAG dataset, where SubgraphX performs better than GNNExplainer. Furthermore, SubgraphX has a reasonable runtime, taking about seven times longer than GNNExplainer. We successfully applied SubgraphX to the Karate Club dataset, where it outperforms GNNExplainer as well. The hyperparameter study revealed that the number of Monte Carlo tree search iterations and Monte Carlo sampling steps are the most important hyperparameters, directly trading performance for runtime. Lastly, we show that our proposed improvements to SubgraphX significantly enhance fidelity and runtime.
The authors' description of the algorithm was clear and concise. The original implementation is available in the DIG library and served as a reference for our implementation.
The authors performed extensive experiments, which we could not replicate at their full scale due to resource constraints. However, we achieved similar results on a subset of the datasets used. Another issue was that, although the authors' original code and the datasets are publicly available, we encountered many compatibility issues.
The original authors briefly reviewed our work and agreed with the findings.
Paper Url: https://proceedings.mlr.press/v139/yuan21c
Paper Venue: Other venue (not in list)
Venue Name: ICML 2021
Confirmation: The report pdf is generated from the provided camera ready Google Colab script; the report metadata is verified from the camera ready Google Colab script; the report contains correct author information; the report contains a link to the code and SWH metadata; the report follows the ReScience latex style guides as in the Reproducibility Report Template (https://paperswithcode.com/rc2022/registration); the report contains the Reproducibility Summary on the first page; the latex .zip file is verified from the camera ready Google Colab script.
Latex: zip
Journal: ReScience Volume 9 Issue 2 Article 41
Doi: https://www.doi.org/10.5281/zenodo.8173753
Code: https://archive.softwareheritage.org/swh:1:dir:439719e0ad99cbd3d980619c24dec1744b408dd0