Interactive and Explainable Graph Neural Networks with Uncertainty Awareness and Adaptive Human Feedback

18 Sept 2025 (modified: 28 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: interpretable graph neural network (GNN), optimal-transport alignment, uncertainty estimation
Abstract: Current graph neural networks (GNNs) face fundamental challenges that hinder their deployment in real-world applications: (1) they cannot dynamically estimate uncertainty or quantify confidence in learned relationships, and (2) they fail to effectively incorporate human feedback for real-time model refinement. To address these challenges, we propose a unified probabilistic framework, Interactive Graph Explainability with Uncertainty, that integrates uncertainty-aware learning with human-in-the-loop adaptation. Our approach learns uncertainty-sensitive edge weighting and provides a systematic methodology for incorporating expert feedback to correct erroneous relational inferences. At its core, the framework models explanatory subgraph selection through a learnable latent variable, assigning sparsity-constrained importance scores to edges while adaptively adjusting subgraph sizes based on instance complexity. This yields interpretable explanations with calibrated uncertainty estimates without compromising predictive performance. We ensure representation fidelity through a differentiable objective that aligns subgraph embeddings with the original graph's predictive information. Crucially, our system enables interactive refinement: domain experts can directly modify explanations (e.g., by adding or removing edges), and the model dynamically integrates this feedback to improve subsequent inferences. Experimental results demonstrate that our method generates more concise and informative explanations than existing approaches while maintaining competitive accuracy. Moreover, the integrated feedback mechanism further enhances explanation quality, validating the benefits of combining probabilistic modeling with human feedback.
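The core ideas in the abstract — sparsity-constrained latent edge selection, per-edge uncertainty, and expert feedback on edges — can be illustrated with a minimal sketch. This is our own hypothetical rendering (function names, the Gumbel-sigmoid relaxation, and the feedback rule are assumptions for illustration, not the paper's actual implementation):

```python
import torch

def sample_edge_mask(edge_logits, temperature=0.5, hard=False):
    """Sample a relaxed Bernoulli (binary concrete) mask per edge.

    edge_logits: (E,) learnable scores; higher -> edge more likely kept.
    Returns a soft mask in (0, 1) that stays differentiable w.r.t. the logits.
    """
    u = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log(1 - u)
    mask = torch.sigmoid((edge_logits + logistic_noise) / temperature)
    if hard:
        # Straight-through: discrete mask forward, soft gradient backward.
        mask = (mask > 0.5).float() + mask - mask.detach()
    return mask

def sparsity_penalty(edge_logits, target_ratio=0.3):
    """Encourage the expected subgraph size to match a target edge ratio."""
    keep_prob = torch.sigmoid(edge_logits)
    return (keep_prob.mean() - target_ratio).abs()

def edge_uncertainty(edge_logits):
    """Bernoulli entropy per edge: highest near p = 0.5 (model is unsure)."""
    p = torch.sigmoid(edge_logits)
    return -(p * p.log() + (1 - p) * (1 - p).log())

def apply_feedback(edge_logits, keep_idx=(), drop_idx=(), strength=4.0):
    """Nudge edge logits toward expert edits: force-keep or force-drop edges."""
    logits = edge_logits.clone()
    for i in keep_idx:
        logits[i] = strength
    for i in drop_idx:
        logits[i] = -strength
    return logits
```

In such a scheme the sparsity penalty would be added to the task loss during training, the entropy term would flag edges worth surfacing to an expert, and `apply_feedback` would bake the expert's corrections back into subsequent inference.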
Primary Area: interpretability and explainable AI
Submission Number: 10389