TAME: A Task-Agnostic Framework for Robust Graph Neural Network Explanations via Structural Mixup

Submitted to ICLR 2026. 07 Sept 2025 (modified: 11 Feb 2026). License: CC BY 4.0
Keywords: Graph Neural Networks, Explainability, Out-of-Distribution Generalization
Abstract: Graph Neural Networks (GNNs) have demonstrated remarkable performance across a range of applications involving graph-structured data, particularly in high-stakes domains. However, the opaque nature of their decision-making limits their trustworthiness and broader adoption. Existing post-hoc explanation methods aim to improve interpretability by identifying subgraphs that influence GNN predictions. Yet these approaches are typically restricted to a single type of task, either classification, with discrete decision boundaries, or regression, with continuous targets, which limits their general applicability. In this work, we propose TAME, a unified, task-agnostic framework for GNN explanation that addresses both the limitations of task-specific methods and the distribution shift caused by subgraph extraction. Our approach integrates contrastive learning into the Graph Information Bottleneck (GIB) framework, enabling consistent explanations across both classification and regression tasks. Furthermore, we introduce a novel mixup strategy built upon graph pooling, which generates in-distribution explanations through hard structural perturbations. Extensive experiments on diverse tasks demonstrate that TAME achieves state-of-the-art performance in generating robust and interpretable explanations on both synthetic and real-world datasets.
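To make the "mixup via graph pooling" idea concrete, here is a minimal, hypothetical sketch of structural mixup in a pooled cluster space. This is an illustration of the general technique, not the paper's actual TAME implementation: the assignment matrix `S`, the hard cluster split, and the mixing coefficient are all assumptions introduced for the example. Two graphs are each coarsened to the same number of clusters via an assignment matrix (as in cluster-based pooling), and their coarsened adjacencies are convexly combined.

```python
import numpy as np

def pool_adjacency(A, S):
    # Coarsen a graph: S (n x k) assigns n nodes to k clusters,
    # giving the pooled adjacency S^T A S of shape (k x k).
    return S.T @ A @ S

def structural_mixup(A1, S1, A2, S2, lam=0.5):
    # Mix two graphs at the cluster level: pool both to k clusters,
    # then take a convex combination of the coarsened adjacencies.
    P1 = pool_adjacency(A1, S1)
    P2 = pool_adjacency(A2, S2)
    return lam * P1 + (1.0 - lam) * P2

# Toy example (hypothetical): a 4-node path graph and a 4-node star
# graph, each pooled to k = 2 clusters by the same hard assignment.
A_path = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
A_star = np.array([[0, 1, 1, 1],
                   [1, 0, 0, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 0]], dtype=float)
S = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)  # nodes {0,1} vs. {2,3}

A_mix = structural_mixup(A_path, S, A_star, S, lam=0.7)
print(A_mix.shape)  # (2, 2)
```

Because both inputs are symmetric and pooling preserves symmetry (`S^T A S` is symmetric when `A` is), the mixed adjacency remains a valid weighted graph, which is what keeps such perturbations close to the data distribution.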
Supplementary Material: pdf
Primary Area: interpretability and explainable AI
Submission Number: 2795