Reproducibility study of “Robust Counterfactual Explanations on Graph Neural Networks”

Published: 11 Apr 2022, Last Modified: 05 May 2023
Venue: RC2021
Keywords: Counterfactual, explanations, GNN, robust, graph neural networks, interpretation, explainable AI, decision logic, reproducibility
TL;DR: We were partly able to reproduce the claims of the original authors.
Abstract:

Scope of Reproducibility
The aim of this study is to reproduce the claims made in the paper "Robust Counterfactual Explanations on Graph Neural Networks". The authors claim to have developed a novel method for explaining Graph Neural Networks (GNNs) that outperforms existing explainer methods in three ways: it is (1) more counterfactual, (2) more robust to noise, and (3) more efficient in terms of time.

Methodology
The original authors' codebase contained everything necessary to train both the GNNs and the explainer models from scratch; however, some alterations on our part were needed before we could use it. To validate the authors' claims, the trained RCExplainer model is compared with other explainer models in terms of fidelity, robustness and efficiency. We extended the work by investigating generalisation to the image domain and by verifying the authors' implementation.

Results
To validate the original paper, we compare both the pre-trained model and a retrained model against the results reported in the original paper. The retrained RCExplainer outperformed the other methods on fidelity and robustness, which corresponds with the results of the original authors. The measured efficiency of the method also matches the original result. As an extension, the same comparison is performed using a train-test split, which showed no significant difference. The implementation of the metric is investigated and concerns are raised. Finally, the method generalises well to MNISTSuperpixels in terms of fidelity, but falls short on robustness.

What was easy
The original paper described its metrics for comparing explainer models clearly, which made them easier to reproduce. Moreover, a codebase was available that included a pre-trained explainer model and files for training the other models. Because of this, we could easily trace the reasons for differences between our results and those of the paper.
What was difficult
The most difficult part of the reproduction study was determining how the provided codebase functioned. The original authors did provide a general README file with instructions for all parts of the code; however, following these instructions, we were not able to run the code without changes. As the codebase was very extensive, it was difficult to understand how the different modules worked together.

Communication with original authors
We did not find it necessary to contact the original authors for this reproduction study.
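The fidelity metric used to compare the explainers above is, roughly, the drop in the model's predicted probability for its original class once the explainer-selected edges are removed from the graph. A minimal sketch of that idea, assuming this interpretation (the `fidelity` and `toy_model` names, and the dict-based graph representation, are our own illustration, not the authors' code):

```python
def fidelity(model, graph, explanation_mask):
    """Drop in the predicted probability of the originally predicted class
    after removing the edges the explainer selected.

    `model` is any callable mapping a graph to a list of class
    probabilities; the exact masking scheme in the paper may differ.
    """
    probs_full = model(graph)
    pred_class = probs_full.index(max(probs_full))
    # Remove the explanation edges: keep only edges NOT selected by the mask.
    masked_graph = {
        "edges": [e for e, sel in zip(graph["edges"], explanation_mask) if not sel],
        "num_nodes": graph["num_nodes"],
    }
    probs_masked = model(masked_graph)
    return probs_full[pred_class] - probs_masked[pred_class]


def toy_model(graph):
    """Toy stand-in for a GNN: class-1 probability grows with edge count."""
    p1 = min(1.0, 0.2 * len(graph["edges"]))
    return [1.0 - p1, p1]


g = {"edges": [(0, 1), (1, 2), (2, 3), (3, 0)], "num_nodes": 4}
mask = [True, True, False, False]  # explainer selects the first two edges
print(round(fidelity(toy_model, g, mask), 2))  # larger drop = more counterfactual
```

A high fidelity score means the selected edges were genuinely decisive for the prediction, which is why the report uses it as a proxy for how counterfactual an explanation is.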
Paper Url:
Paper Venue: NeurIPS 2021