AIMing for Explainability in GNNs

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Graph Neural Networks, explainability, graph kernels
Abstract: As machine learning models become increasingly complex and are deployed in critical domains such as healthcare, finance, and autonomous systems, the need for effective explainability has grown. Graph Neural Networks (GNNs), which excel at processing graph-structured data, have seen significant advancements, but explainability for GNNs is still in its early stages. Existing approaches fall into two broad categories: post-hoc explainers and inherently interpretable models. Their evaluation is often limited to synthetic datasets for which ground-truth explanations are available, or conducted under the assumption that each XAI method extracts explanations for a fixed network. We focus specifically on inherently interpretable GNNs (e.g., based on prototypes or graph kernels), which enable model-level explanations. In their evaluation, these models are typically claimed to be inherently interpretable and assessed only for predictive accuracy, without applying concrete interpretability metrics. These evaluation practices fundamentally limit the value of any discussion of explainability. We propose a unified and comprehensive framework for measuring and evaluating explainability in GNNs that extends beyond synthetic datasets, ground-truth constraints, and rigid assumptions, while also supporting the development and refinement of models based on the derived explanations. The framework comprises measures of Accuracy, Instance-level explanations, and Model-level explanations (AIM), inspired by the generic Co-12 conceptual properties of explanation quality (Nauta et al., 2023). We apply this framework to a suite of existing models, deriving ways to extract explanations from them and highlighting their strengths and weaknesses. Furthermore, based on this analysis using AIM, we develop a new model called XGKN that demonstrates improved explainability while performing on par with existing models. Our approach aims to advance the field of Explainable AI (XAI) for GNNs, offering more robust and practical solutions for understanding and interpreting complex models.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12108