Out-of-Distribution Robust Explainer for Graph Neural Networks

ICLR 2026 Conference Submission 10825 Authors

18 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Graph Neural Network, Explainable Artificial Intelligence, Out-of-distribution
Abstract: Graph Neural Networks (GNNs) are powerful tools for analyzing graph-structured data; however, their interpretability remains a challenge, leading to the growing use of eXplainable AI (XAI) methods. Most existing XAI models assume that GNNs are well-trained and that all nodes in the graph share data characteristics similar to those seen during GNN training. In real-world applications, however, new nodes and edges are frequently added to the input graph at test time. This dynamic environment can introduce out-of-distribution (OOD) nodes, potentially undermining the reliability of XAI models. To address this issue, we propose an OOD Robust Explainer (ORExplainer), a post-hoc, instance-level explanation model specifically designed to provide robust and reliable explanations in the presence of OOD nodes, noise, and outliers in graphs. ORExplainer incorporates energy scores to capture structural dependencies, allowing it to prioritize in-distribution nodes while reducing the impact of OOD nodes. We conduct experiments with varying types of OOD node inclusion, and ORExplainer demonstrates superior robustness of the generated explanations across synthetic and real-world datasets. Our code is available at https://anonymous.4open.science/r/ORExplainer-C52C/.
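The abstract does not spell out how the energy scores are computed; a common formulation in OOD detection derives an energy score from a classifier's logits, E(x) = -T · logsumexp(logits / T), where lower energy indicates a more in-distribution input. A minimal sketch of that scoring (not the paper's exact method) applied to per-node GNN logits:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy score from classifier logits: E(x) = -T * logsumexp(logits / T).

    Lower energy typically indicates an in-distribution node; higher
    energy flags a likely OOD node. The temperature T and the use of
    raw GNN logits here are illustrative assumptions.
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max(axis=-1, keepdims=True)  # shift for numerical stability
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

# Peaked (confident) logits yield lower energy than flat (uncertain) logits,
# so an explainer could down-weight high-energy nodes as likely OOD.
peaked = energy_score([10.0, 0.0, 0.0])
flat = energy_score([0.0, 0.0, 0.0])
```

In this sketch, nodes with high energy would be down-weighted when constructing the explanation subgraph, which matches the abstract's stated goal of prioritizing in-distribution nodes.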
Primary Area: interpretability and explainable AI
Submission Number: 10825