DR-CFGNN: A Completion-Aware Framework for Counterfactual Explainability in Graph Neural Networks

ICLR 2026 Conference Submission 18779 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Explainable AI, Counterfactual Explanations, Graph Neural Networks
TL;DR: A novel framework for counterfactual explainability in graph neural networks (GNNs) that considers both edge removals and edge additions.
Abstract: In this study, we propose a novel framework for counterfactual explainability in graph neural networks (GNNs). To the best of our knowledge, this is the first generic, model-agnostic method for local-level GNN explainability that considers both edge removal and edge addition. The approach builds on recent progress in factual explainability, coupling it with an encoder-decoder deep learning model that learns valid and robust graph expansions. In addition to standard benchmark datasets, we evaluate our method on a new variant of a popular synthetic dataset to study how explainability is affected by data incompleteness, a common characteristic of real-world graph data. A multi-faceted experimental analysis, using both established metrics from the literature and novel ones that assess the validity and quality of explanations, demonstrates the improvements our approach offers over state-of-the-art baselines.
Primary Area: interpretability and explainable AI
Submission Number: 18779
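
The following is a minimal, hypothetical sketch (in plain PyTorch) of the generic perturbation-based idea the abstract alludes to: optimizing a sparse, differentiable mask over node pairs so that a frozen GNN changes its prediction, where the mask can both delete existing edges and insert absent ones. It is not the authors' DR-CFGNN (which additionally leverages factual explanations and an encoder-decoder to learn graph expansions); the toy GCN, all names, and the hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: counterfactual explanation by edge removal AND addition.
# A single logit matrix parameterizes a flip probability per node pair; an L1
# penalty keeps the perturbation sparse. The explained GNN stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGCN(nn.Module):
    """Two-layer GCN over a dense adjacency matrix (graph classification)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).clamp(min=1e-6).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(a_norm @ self.w1(x))
        h = a_norm @ self.w2(h)
        return h.mean(0)  # mean-pool node logits into graph-level logits

def counterfactual_edges(model, x, adj, target_class, steps=300, lr=0.1, lam=0.05):
    """Search for a perturbed adjacency that the frozen model maps to target_class."""
    for prm in model.parameters():
        prm.requires_grad_(False)  # keep the explained model frozen
    model.eval()
    logits = nn.Parameter(torch.zeros_like(adj))          # one flip logit per node pair
    opt = torch.optim.Adam([logits], lr=lr)
    upper = torch.triu(torch.ones_like(adj), diagonal=1)  # undirected graph, no self-loops
    for _ in range(steps):
        p = torch.sigmoid(logits) * upper
        p = p + p.t()                                     # symmetrize flip probabilities
        adj_cf = adj * (1 - p) + (1 - adj) * p            # remove existing / add absent edges
        out = model(x, adj_cf)
        loss = F.cross_entropy(out.unsqueeze(0),
                               torch.tensor([target_class])) + lam * p.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        p = torch.sigmoid(logits) * upper
        p = p + p.t()
        return torch.where(p > 0.5, 1 - adj, adj)         # discretize: flip confident entries
```

The single flip parameterization is the design point relevant to the abstract: because the mask acts on absent node pairs as well as existing edges, the counterfactual search can complete missing structure rather than only deleting edges, which matters when the observed graph is incomplete.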