Abstract: Graph Neural Networks (GNNs) have shown strong performance on a variety of graph-related tasks. GNNs learn richer node representations by aggregating the features of neighboring nodes. However, the black-box nature of deep learning models makes it difficult to understand a GNN's internal decision process. To this end, in this paper, we propose a model-agnostic method called GNN Prediction Interpreter (GPI) to explain the effect of node features on a GNN's predictions. Specifically, GPI first quantifies the correlation between node features and the GNN's prediction, and then, based on these quantitative results, identifies the subset of node features that have an essential impact on the prediction. Experiments demonstrate that GPI provides better explanations than state-of-the-art methods.
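The two-stage idea described above (quantify feature–prediction correlation, then select the essential subset) can be sketched in a minimal, model-agnostic way. This is an illustrative sketch only: the abstract does not specify GPI's correlation measure, so the code below assumes a simple perturbation-based Pearson-correlation estimator, and the names `gpi_feature_importance` and `select_essential_features` are hypothetical, not from the paper.

```python
import numpy as np

def gpi_feature_importance(predict_fn, x, n_samples=500, noise=0.1, seed=0):
    """Score each feature of node vector x by the absolute Pearson
    correlation between random perturbations of that feature and the
    black-box model's scalar output (an assumed stand-in for GPI's
    correlation quantification step)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample perturbed copies of the node's feature vector.
    perturbed = x + noise * rng.standard_normal((n_samples, d))
    preds = np.array([predict_fn(p) for p in perturbed])
    scores = np.zeros(d)
    for j in range(d):
        # Correlation between feature j's values and the predictions.
        scores[j] = abs(np.corrcoef(perturbed[:, j], preds)[0, 1])
    return scores

def select_essential_features(scores, k):
    """Return indices of the k highest-scoring features (the assumed
    'essential subset' selection step)."""
    return np.argsort(scores)[::-1][:k]
```

For example, applying this to a toy black-box model `lambda v: 5.0 * v[2] + 0.1 * v[0]` with a 4-dimensional input correctly ranks feature 2 as the most essential, since it dominates the output.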