Abstract: Explainability is crucial for the deployment of Graph Neural Networks (GNNs) in real-world applications. Unfortunately, existing explanation methods primarily focus on identifying important graph components, such as nodes and edges, rather than providing insight into the fundamental message passing mechanism of GNNs. This shortcoming impedes our understanding of how GNNs make predictions and limits their deployment in critical applications. In this paper, we introduce Revelio, a novel method that provides faithful explanations of message flows in GNNs. Revelio leverages a learning-based approach to quantify the importance of message flows, achieving strong faithfulness, compatibility, and efficiency. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of Revelio through quantitative and qualitative assessments.
External IDs: dblp:conf/icde/HeKH25