TL;DR: We propose *PrunE*, a pruning-based graph OOD method that removes spurious edges rather than directly identifying invariant edges for OOD generalization.
Abstract: Graph Neural Networks often suffer significant performance degradation under distribution shifts between training and test data, hindering their applicability in real-world scenarios. Recent studies have proposed various methods to address the out-of-distribution (OOD) generalization challenge, with many methods in the graph domain focusing on directly identifying an invariant subgraph that is predictive of the target label. However, we argue that directly identifying the causal edges is challenging and error-prone, especially when some spurious edges exhibit strong correlations with the targets. In this paper, we propose *PrunE*, the first pruning-based graph OOD method, which eliminates spurious edges to improve OOD generalization. By pruning spurious edges, *PrunE* preserves the invariant subgraph more effectively than methods that attempt to identify it directly, thereby enhancing OOD generalization. Specifically, *PrunE* employs two regularization terms to prune spurious edges: 1) a *graph size constraint* to exclude uninformative spurious edges, and 2) *$\epsilon$-probability alignment* to further suppress the occurrence of spurious edges. Through theoretical analysis and extensive experiments, we show that *PrunE* achieves superior OOD performance and significantly outperforms previous state-of-the-art methods.
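To make the two regularizers concrete, here is a minimal sketch of how such penalties might look under an edge-mask formulation with learned per-edge keep probabilities. The function names, the budget/`eps`/`k` parameters, and the exact penalty forms are illustrative assumptions, not the paper's definitions:

```python
import torch

def graph_size_penalty(edge_probs, budget=0.5):
    # Graph size constraint (assumed form): penalize total retained
    # edge mass above a budget fraction, discouraging the mask from
    # keeping uninformative spurious edges.
    return torch.relu(edge_probs.mean() - budget)

def eps_alignment_penalty(edge_probs, eps=0.05, k=10):
    # epsilon-probability alignment (assumed form): pull the k
    # lowest-scoring edges' keep probabilities toward a small eps,
    # further suppressing edges already deemed least informative.
    low = torch.topk(edge_probs, k, largest=False).values
    return ((low - eps) ** 2).mean()

# Example: random per-edge keep probabilities for a 40-edge graph
edge_probs = torch.sigmoid(torch.randn(40))
loss_reg = graph_size_penalty(edge_probs) + eps_alignment_penalty(edge_probs)
```

In practice such terms would be added to the task loss and minimized jointly with the edge-mask parameters; the pruned graph is then fed to the downstream GNN classifier.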
Primary Area: Deep Learning->Graph Neural Networks
Keywords: invariant learning, out-of-distribution generalization, graph neural networks
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Submission Number: 6548