Keywords: Graph neural networks, GNN explainability, Semi-supervised learning, Pseudo-labeling
TL;DR: We turn weak, noisy soft scores from GNN explainers into sharp, binarized edge supervision through self-guided pseudo-labeling, achieving both higher accuracy and human-level interpretability.
Abstract: Post-hoc explanation for graph neural networks (GNNs) is the task of explaining their decisions by identifying important subgraphs.
Because discretization is non-differentiable, most prior work trains explainers that output continuous edge-importance scores, which often yield blurry, mixed score distributions.
This stems from optimizing solely to preserve the original prediction, without effective regularization and without edge-level ground-truth labels.
We present 3SG-Explainer (Semi-Supervised and Self-Guided Explainer), which converts weak prediction-preserving signals into explicit edge supervision and, in turn, markedly improves explanation accuracy while sharply polarizing edges into important versus unimportant.
Concretely, we introduce confidence-based thresholds to convert noisy soft scores into semi-supervised pseudo-labels, then train a lightweight message-passing explainer on these labels.
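The thresholding step above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name `pseudo_label_edges` and the threshold values `tau_hi`/`tau_lo` are assumptions chosen for illustration.

```python
import numpy as np

def pseudo_label_edges(scores, tau_hi=0.8, tau_lo=0.2):
    """Convert soft edge-importance scores into semi-supervised pseudo-labels.

    Edges scoring at least tau_hi are labeled important (1), edges scoring
    at most tau_lo are labeled unimportant (0), and edges in between are
    left unlabeled (-1), to be excluded from the supervised loss.
    (Threshold values here are illustrative, not taken from the paper.)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.full(scores.shape, -1, dtype=int)  # -1 = unlabeled
    labels[scores >= tau_hi] = 1
    labels[scores <= tau_lo] = 0
    return labels

labels = pseudo_label_edges([0.95, 0.5, 0.1, 0.85])
# → array([ 1, -1,  0,  1])
```

A lightweight message-passing explainer would then be trained only on the confidently labeled edges, treating the remainder as unlabeled data in the semi-supervised objective.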
We also prove that the score distributions produced by 3SG-Explainer are better polarized than those of unsupervised baselines.
Experiments on four benchmarks across multiple metrics show that 3SG-Explainer outperforms state-of-the-art baselines in edge-level explanation accuracy.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 15423