[Re] Explaining Temporal Graph Models through an Explorer-Navigator Framework

TMLR Paper2211 Authors

15 Feb 2024 (modified: 21 Apr 2024) · Decision pending for TMLR
Abstract: Temporal graphs model complex dynamic relations that change over time, and are being used in a growing number of applications. In recent years, several graph neural networks (GNNs) have been proposed specifically for this temporal setting (Xu et al., 2020; Rossi et al., 2020). However, these models are notoriously hard to interpret. For this reason, the original authors (Xia et al., 2023) proposed the Temporal GNN Explainer (T-GNNExplainer), an explorer-navigator framework that efficiently computes sparse explanations of target temporal GNNs. We reproduce the main findings of the original paper, extend the work by proposing a different type of navigator method, and examine in detail the explanation capabilities and efficiency of the framework under various model and hyperparameter settings. We confirm that T-GNNExplainer outperforms the other baselines across nearly all datasets and metrics. Our findings suggest that the navigator helps bias the search process and that T-GNNExplainer can find an exact influential event set. Moreover, we examine the effect of different navigator methods and quantify the runtime-fidelity tradeoff controlled by two hyperparameters.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Addressed several issues highlighted by the reviewers:
- Added a paragraph to the introduction mentioning a more recent work [Chen & Ying, NeurIPS 2023].
- Added a paragraph to the introduction discussing the motivation behind this work.
- Added a new section (3.2) introducing the TGAT block ([Kazemi et al., JMLR 2020]) and providing the necessary calculus for more rigorous definitions of the navigators and two baselines.
- Added a schematic diagram of the MLPNavigator.
- Revised Sections 3.4 and 3.5 by providing equations for several methods we introduce there.
- Moved two sections, on the hardware and the hyperparameters we used, from the main body of the text to Appendices A.1 and A.2.
- Improved Section 6.1, clarifying the limitations of the proposed explanation method and again highlighting the work of [Chen & Ying, NeurIPS 2023], which outperforms T-GNNExplainer.
Assigned Action Editor: ~Yujia_Li1
Submission Number: 2211