Global Meta-path-level Counterfactual Explanation for Heterogeneous Graph Neural Networks by Path Exclusion
Keywords: Heterogeneous graph neural network, Counterfactual explanation, Explainable artificial intelligence
Abstract: Heterogeneous graph neural networks (HGNNs) capture rich structural and semantic information in real-world data, but their decision processes are often opaque. While recent work in graph counterfactual explanations perturbs nodes, edges, or subgraphs to identify influential components, such approaches are too coarse for heterogeneous graphs: a single perturbation can disrupt many meta paths, obscuring which relations truly drive model behavior. Since meta paths are fundamental units of semantic information in HGNNs, explanations at the meta-path level are both natural and necessary.
We introduce meta path exclusion, a framework for global counterfactual explanations that directly perturbs specific meta paths. To achieve this, we propose the spare path algorithm, which modifies the forward pass of HGNNs to exclude target meta paths while preserving the rest of the graph, enabling precise attribution of model performance to individual paths.
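The idea of excluding a meta path during the forward pass while leaving the graph itself untouched can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual spare path algorithm: node features are plain scalars, each edge carries a relation label standing in for a meta path, and the learned per-relation transforms and attention of a real HGNN are omitted.

```python
# Hypothetical sketch of meta-path exclusion in an HGNN-style forward pass.
# Messages along excluded relations are skipped at aggregation time; the
# graph structure and features are never modified.

def forward(features, edges, excluded_paths=frozenset()):
    """features: {node: float}; edges: list of (src, dst, relation).

    Sums neighbor features into each destination node, relation by
    relation, skipping any relation in excluded_paths.
    """
    out = dict(features)  # start from the nodes' own features
    for src, dst, rel in edges:
        if rel in excluded_paths:
            continue  # meta-path exclusion: drop these messages only
        out[dst] += features[src]  # real HGNNs would apply W_rel / attention here
    return out

# Toy heterogeneous graph: two authors linked to one paper.
feats = {"a1": 1.0, "a2": 2.0, "p1": 0.0}
edges = [("a1", "p1", "author-paper"), ("a2", "p1", "author-paper")]

full = forward(feats, edges)                                        # p1 -> 3.0
spared = forward(feats, edges, excluded_paths={"author-paper"})     # p1 -> 0.0
```

Comparing the model's predictions with and without a given path (`full` vs. `spared` above) is what attributes performance to that path; in an actual HGNN the same masking would be applied inside each relation-wise message-passing layer.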
Experiments on four benchmark datasets (DBLP, OGB-MAG, IMDB, and MovieLens) show that excluding a small set of meta paths can significantly reduce accuracy, revealing their critical role. Moreover, when these identified paths are reused in meta-path-dependent HGNNs, their removal consistently degrades accuracy by 5–12%, confirming their general importance. Our results establish meta paths as atomic units for fine-grained, faithful counterfactual explanations in heterogeneous graphs.
Primary Area: interpretability and explainable AI
Submission Number: 8805