Unlock the Black Box by Interpreting Graph Convolutional Networks via Additive Decomposition

TMLR Paper 1186 Authors

23 May 2023 (modified: 28 Aug 2023) · Rejected by TMLR
Abstract: The widespread adoption of graph neural networks (GNNs) across a broad range of applications calls for versatile interpretability tools that offer a better understanding of GNNs' intrinsic structure. We propose an interpretable GNN framework that decomposes the prediction into an additive combination of the main effects of node features and the contributions of edges. The key component of our framework is the generalized additive model with the graph convolutional network (GAM-GCN), which allows for global interpretation of node features. The inherent interpretability of the GAM and the expressive power of the GCN are preserved and naturally connected. Further, the effects of neighboring edges are measured by edge perturbation and surrogate linear modeling, and the most important subgraph can be selected. We evaluate the proposed approach through extensive experiments and show that it is a promising tool for interpreting GNNs and unlocking the black box.
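As a rough illustration of the additive decomposition described in the abstract (the notation below is ours, not the authors'), the prediction for a node v might take the form

\hat{y}_v \;=\; g\Big( \beta_0 \;+\; \sum_{j=1}^{p} f_j(x_{v,j}) \;+\; \sum_{e \in \mathcal{N}(v)} \phi_e \Big),

where each shape function f_j captures the main effect of the j-th node feature, each \phi_e is the contribution attributed to a neighboring edge (estimated via edge perturbation and surrogate linear modeling), and g is a link function. This is only a sketch of the additive form suggested by the abstract; the exact parameterization is given in the paper.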
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Nadav_Cohen1
Submission Number: 1186