Explaining Graph Neural Networks for Node Similarity on Graphs

TMLR Paper 6684 Authors

27 Nov 2025 (modified: 08 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Similarity search is a fundamental task in many applications that deal with graph data, such as citation networks or knowledge graphs. Prior work on the explainability of graph neural networks (GNNs) has focused on supervised tasks such as node classification and link prediction, leaving the challenge of explaining similarities between node embeddings unaddressed. We take a step towards filling this gap by formulating the problem, identifying desirable properties of explanations of similarity, and proposing intervention-based metrics that quantitatively assess them. Using our framework, we evaluate representative GNN explanation methods based on mutual information (MI) and gradients (GB). We find that, unlike MI explanations, GB explanations have three desirable properties. First, they are *actionable*: selecting particular inputs results in predictable changes in the similarity scores of the corresponding nodes. Second, they are *consistent*: the effect of selecting certain inputs hardly overlaps with the effect of discarding them. Third, they can be pruned significantly to obtain *sparse* explanations that retain their effect on similarity scores. These findings highlight the utility of our metrics as a framework for evaluating the quality of explanations of node similarity in GNNs.
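To make the intervention-based evaluation concrete, below is a minimal sketch (not the authors' code) of what an actionability-style check could look like: a toy one-layer GNN embeds nodes, and a hypothetical binary feature mask, standing in for an explanation, is used to either keep or discard the selected inputs before recomputing the similarity of a node pair. All names, shapes, and the `embed`/`cosine` helpers are illustrative assumptions, not the paper's implementation.

```python
import torch

# Minimal sketch of an intervention-based "actionability" check.
# A toy one-layer GNN embeds nodes; we compare the similarity of a
# node pair before and after intervening on an explanation mask.
# Everything here (graph, mask, helpers) is illustrative.

torch.manual_seed(0)

num_nodes, in_dim, out_dim = 6, 8, 4
X = torch.randn(num_nodes, in_dim)               # node features
A = (torch.rand(num_nodes, num_nodes) < 0.4).float()
A = ((A + A.T) > 0).float()                      # symmetrize edges
A.fill_diagonal_(1.0)                            # add self-loops
D_inv = torch.diag(1.0 / A.sum(dim=1))           # row normalization
W = torch.randn(in_dim, out_dim)                 # fixed random weights

def embed(features: torch.Tensor) -> torch.Tensor:
    """One round of mean-aggregation message passing."""
    return torch.tanh(D_inv @ A @ features @ W)

def cosine(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.cosine_similarity(u, v, dim=0)

i, j = 0, 3                                      # node pair being explained
base = cosine(embed(X)[i], embed(X)[j])

# A hypothetical explanation: a binary mask over input features that an
# explainer (MI- or gradient-based) deems important for this node pair.
mask = torch.zeros(in_dim)
mask[:3] = 1.0                                   # "selected" features

Z_keep = embed(X * mask)                         # intervention: keep only the selection
Z_drop = embed(X * (1.0 - mask))                 # intervention: discard the selection

print(f"similarity (full inputs):       {base.item():.3f}")
print(f"similarity (selected only):     {cosine(Z_keep[i], Z_keep[j]).item():.3f}")
print(f"similarity (selection removed): {cosine(Z_drop[i], Z_drop[j]).item():.3f}")
# Under the paper's desiderata, an actionable and consistent explanation
# should move these scores in predictable, largely non-overlapping ways
# relative to the full-input score.
```

In this sketch, "selecting" and "discarding" are complementary masking interventions; comparing their effects on the similarity score mirrors the actionability and consistency properties described in the abstract, though the actual metrics are defined in the paper.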
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 6684