Keywords: Graph Neural Networks, Causal Inference
Abstract: Graph Neural Networks (GNNs), despite their success, are fundamentally limited to learning correlational mappings. We theoretically demonstrate that this limitation is inherent to the neighborhood aggregation paradigm of GNNs. This inability to distinguish true causality from spurious shortcut patterns leads to poor generalization. To bridge this gap, we introduce the Principle of Causal Alignment, a novel learning paradigm designed to endow GNNs with causal invariance without altering their architectures or compromising inference efficiency. We then present \texttt{CausGNN}, an instantiation of this principle. It employs a teacher-student strategy in which a teacher GNN learns to compute the interventional distribution via backdoor adjustment and then distills this causal logic into the student GNN, compelling it to learn invariant representations. Extensive experiments show that \texttt{CausGNN} not only improves the performance of various classic GNNs on node-level tasks but also exhibits superior robustness against noise and Out-Of-Distribution (OOD) challenges.
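The abstract's teacher-student transfer can be illustrated with a generic distillation objective. The sketch below is only a minimal, hypothetical illustration of matching a student's predictive distribution to a teacher's via a temperature-scaled KL divergence; the paper's actual loss, backdoor-adjustment estimator, and GNN architectures are not specified here, so `distill_loss`, its signature, and the temperature `t` are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) at temperature t: the student is
    penalized for diverging from the teacher's (interventional)
    target distribution."""
    p = softmax(teacher_logits, t)  # teacher's causal targets (assumed)
    q = softmax(student_logits, t)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

In a full training loop, this term would typically be combined with the ordinary supervised loss on node labels, so the student learns both the task and the teacher's invariant behavior.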
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17120