Keywords: Graph Neural Network, Heterogeneous Graph, Explainability
TL;DR: RoHeX is a robust explainer for heterogeneous graph neural networks that mitigates noise effects using denoising variational inference and heterogeneous edge semantics, outperforming state-of-the-art methods in explanation quality and robustness.
Abstract: Explaining the prediction process of Graph Neural Networks (GNNs) is critical for enhancing model transparency and trustworthiness. However, real-world graphs are predominantly heterogeneous and often suffer from structural noise, which severely hampers the reliability of existing explanation methods. To address this challenge, we propose RoHeX, a Robust Heterogeneous Graph Neural Network Explainer. RoHeX is grounded in a theoretical analysis of how different heterogeneous GNN architectures amplify noise through message passing. To mitigate this effect, we introduce a denoising variational inference framework that operates on the graph structure to extract robust latent representations. Furthermore, RoHeX incorporates heterogeneous edge semantics into the subgraph generation process and formulates explanation generation as an optimization problem under the graph information bottleneck principle. This enables RoHeX to produce explanations that are both semantically meaningful and structurally robust. Extensive experiments on multiple real-world heterogeneous graph datasets demonstrate that RoHeX significantly outperforms state-of-the-art baselines in explanation quality and robustness to noise.
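For concreteness, the graph information bottleneck principle referenced in the abstract can be sketched as the following objective; the notation here ($G$ for the input heterogeneous graph, $G_S$ for the explanatory subgraph, $Y$ for the model's prediction, $\beta$ for a trade-off coefficient) is illustrative and may differ from the paper's exact formulation:

\[
\max_{G_S \subseteq G} \; I(Y; G_S) \;-\; \beta \, I(G; G_S)
\]

Intuitively, the first term rewards subgraphs that preserve the information needed to reproduce the model's prediction, while the second term penalizes subgraphs that retain noisy or irrelevant structure from the full input graph.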
Primary Area: Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)
Submission Number: 6673