Keywords: Graph Neural Networks, Gradient, Topological Features, Interpretability, Reasoning, Explainable AI, Natural Language Processing
TL;DR: We introduce GraGR, a gradient-guided reasoning framework for GNNs that reduces gradient conflicts and enhances interpretability through gradient-derived features and dynamic reasoning pathways.
Abstract: We propose the GraGR framework, which leverages gradients as reasoning signals to address two intertwined challenges in GNNs: (1) node-level gradient inconsistency across neighbors, and (2) misalignment between model training and model explanations. GraGR's core modules detect and smooth conflicting per-node gradients via a conflict loss and Laplacian-based smoothing, and convert pairwise gradient inner products into attention weights for message passing. We further introduce a meta-gradient scaling scheme (learnable task weights updated by hypergradients) to balance heterogeneous objectives when multiple tasks are present. Together, these components reduce local gradient misalignment and yield more stable, faithful explanations. We extend GraGR to GraGR++ by adding multi-pathway routing (parallel reasoning pathways) and an adaptive training scheduler that gates gradient reasoning until the base model has converged. Importantly, we define six gradient-derived node features that quantitatively characterize a node's learning dynamics and offer interpretable insights. Experiments on benchmark datasets (Cora, Citeseer, PubMed, OGB-MolHIV) show that GraGR and GraGR++ improve predictive performance and explanation coherence over baselines while significantly reducing the proposed conflict energy. This work unifies optimization and interpretability in GNNs under a gradient-as-reasoning paradigm, making node-level dynamics both correctable and explainable.
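For concreteness, here is a minimal PyTorch sketch of the gradient-as-reasoning idea outlined in the abstract: per-node gradients are compared along edges, negative inner products are accumulated into a conflict energy, and the same inner products are normalised into per-target attention weights for message passing. The function names (conflict_energy, gradient_attention), tensor shapes, and the clamped-negative form of the energy are illustrative assumptions rather than the paper's implementation; the Laplacian smoothing, meta-gradient scaling, and routing modules are omitted.

```python
# Illustrative sketch only (not the authors' code) of gradient-conflict energy
# and gradient-derived attention over graph edges.
import torch
import torch.nn.functional as F


def conflict_energy(node_grads: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Sum of clamped negative gradient inner products over edges.

    node_grads: (N, d) per-node gradient features (assumed shape).
    edge_index: (2, E) source/target node indices.
    """
    src, dst = edge_index
    dots = (node_grads[src] * node_grads[dst]).sum(dim=-1)  # (E,) pairwise inner products
    return F.relu(-dots).sum()  # only conflicting (negative) pairs contribute


def gradient_attention(node_grads: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Per-target softmax over gradient inner products, one weight per edge."""
    src, dst = edge_index
    scores = (node_grads[src] * node_grads[dst]).sum(dim=-1)  # (E,)
    scores = scores - scores.max()                            # shift by global max for stability
    exp = scores.exp()
    denom = torch.zeros(node_grads.size(0)).scatter_add_(0, dst, exp)  # sum per target node
    return exp / denom[dst]


# Toy usage: 4 nodes, gradients in R^3, a small bidirectional edge set.
g = torch.randn(4, 3)
edges = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
print(conflict_energy(g, edges), gradient_attention(g, edges))
```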
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 12871