ACLIB-GNN: Incorporating Adversarial Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks
Keywords: causal graph learning, GNN interpretability, adversarial learning, node classification
TL;DR: We propose an interpretable GNN that integrates adversarial causal learning and information bottlenecks for node classification tasks.
Abstract: Graph Neural Networks (GNNs) excel in node classification but face critical interpretability challenges. Although existing explanation methods, including post-hoc and self-interpretable approaches, are widely adopted, they still struggle to effectively enhance prediction through explanation. Moreover, causal graph learning has demonstrated the capacity to identify causal features that bolster predictive performance, but its use in node classification tasks has remained notably limited, primarily due to the non-trivial challenges of handling localized heterogeneity and contextual noise in node-level tasks. To address these gaps, we propose ACLIB-GNN, a novel framework unifying adversarial causal learning and the node information bottleneck. By leveraging graph attention to minimize non-causal feature interference and adversarial training to maximize mutual information between explanatory subgraphs and labels, it explicitly disentangles causal features from shortcut signals, balancing transparency and performance. On four benchmark datasets, ACLIB-GNN outperforms state-of-the-art baselines, using causal subgraphs to enhance classification accuracy, and provides superior explanatory power. Ablation studies validate the synergistic effect of its core components. Notably, the framework also generalizes effectively to graph classification tasks. ACLIB-GNN offers a scalable and trustworthy solution for interpretable node classification based on causal graph learning.
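To make the information-bottleneck idea in the abstract concrete, the following is a minimal, hypothetical sketch of an IB-style objective of the kind described: a classification term on the prediction from the attended subgraph, plus a compression penalty that pushes edge-attention weights toward a sparse, near-binary mask. The function name, the binary-entropy surrogate, and all values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def ib_style_loss(logits, label, edge_attn, beta=0.1):
    """Illustrative IB-style objective (not the paper's exact loss).

    logits    : class scores predicted from the attention-selected subgraph
    label     : ground-truth class index
    edge_attn : attention weights in [0, 1] over candidate edges
    beta      : trade-off between prediction and compression
    """
    # Prediction term: cross-entropy of the subgraph-based prediction,
    # a proxy for maximizing mutual information between subgraph and label.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    ce = -np.log(probs[label])

    # Compression term: binary entropy of the edge mask, a common IB
    # surrogate that penalizes uncertain (non-sparse) attention.
    eps = 1e-9
    compress = -(edge_attn * np.log(edge_attn + eps)
                 + (1.0 - edge_attn) * np.log(1.0 - edge_attn + eps)).mean()

    return ce + beta * compress

# Toy usage: 3 classes, 3 candidate edges.
loss = ib_style_loss(np.array([2.0, 0.5, -1.0]), 0, np.array([0.9, 0.2, 0.8]))
```

In a full adversarial variant, a second term would subtract the discriminator's ability to recover the label from the complement (non-causal) subgraph, so training disentangles causal features from shortcut signals.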
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 24840