On Glocal Explainability of Graph Neural Networks

DASFAA (1) 2022 (modified: 12 Nov 2022)
Abstract: Graph Neural Networks (GNNs) achieve outstanding performance in many graph-based tasks. As the model grows in popularity, explanation techniques are needed to tackle its black-box nature. While most existing methods study instance-level explanations, we propose Glocal-Explainer to generate model-level explanations, consuming local information from substructures of the input graphs to pursue global explainability. Specifically, we investigate the faithfulness and generality of each explanation candidate. In the literature, fidelity and infidelity are widely used to measure faithfulness, yet the two metrics may not align with each other and have not been incorporated together in any explanation technique. In contrast, generality, which measures how many instances share the same explanation structure, has not been explored, due to the computational cost of frequent subgraph mining. We introduce an adapted subgraph mining technique that measures generality as well as faithfulness during explanation candidate generation. Furthermore, we formally define the glocal explanation generation problem and map it to the classic weighted set cover problem, which we solve with a greedy algorithm. Experiments on both synthetic and real-world datasets show that our method produces meaningful and trustworthy explanations with decent quantitative evaluation results.
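To make the set-cover mapping concrete, here is a minimal, illustrative sketch (not the authors' code) of the classic greedy approximation for weighted set cover that the abstract refers to. The candidate names, the coverage sets, and the weights are all hypothetical: each candidate stands in for an explanation subgraph, its coverage set for the graph instances it explains (its generality), and its weight for a faithfulness-derived cost such as one combining fidelity and infidelity.

```python
# Illustrative greedy weighted set cover (assumed setting, not the paper's implementation).
# Each candidate explanation covers a set of graph instances and carries a weight;
# we repeatedly pick the candidate with the lowest weight per newly covered instance.
from typing import Dict, Hashable, List, Set


def greedy_weighted_set_cover(
    universe: Set[Hashable],
    coverage: Dict[Hashable, Set[Hashable]],
    weights: Dict[Hashable, float],
) -> List[Hashable]:
    """Return a list of candidates covering `universe`, chosen greedily by cost-effectiveness."""
    uncovered = set(universe)
    chosen: List[Hashable] = []
    while uncovered:
        best, best_ratio = None, float("inf")
        for cid, covered in coverage.items():
            if cid in chosen:
                continue
            gain = len(covered & uncovered)
            if gain == 0:
                continue
            ratio = weights[cid] / gain  # weight per newly covered instance
            if ratio < best_ratio:
                best, best_ratio = cid, ratio
        if best is None:  # remaining instances cannot be covered by any candidate
            break
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen


# Toy usage: three hypothetical explanation subgraphs covering five graph instances.
instances = {1, 2, 3, 4, 5}
candidate_coverage = {"motif_a": {1, 2, 3}, "motif_b": {3, 4}, "motif_c": {4, 5}}
candidate_weights = {"motif_a": 1.0, "motif_b": 1.5, "motif_c": 1.0}
print(greedy_weighted_set_cover(instances, candidate_coverage, candidate_weights))
# -> ['motif_a', 'motif_c']
```

The greedy rule gives the standard logarithmic approximation guarantee for weighted set cover, which is why it is a natural choice once explanation selection is cast in this form.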