Graph neural networks extrapolate out-of-distribution for shortest paths

ICLR 2026 Conference Submission 20696 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Shortest paths, extrapolation, OOD generalization, neural algorithmic alignment, graph neural network
TL;DR: This paper provides the first rigorous guarantee that graph neural networks can extrapolate out-of-distribution, establishing a novel approach for studying generalization in neural algorithmic reasoning.
Abstract: Neural networks (NNs), despite their success and wide adoption, still struggle to extrapolate out-of-distribution (OOD), i.e., to inputs that are not well represented by their training dataset. Addressing the OOD generalization gap is crucial when models are deployed in environments that differ significantly from the training set, such as applying graph neural networks (GNNs) trained on small graphs to large, real-world graphs. One promising approach for achieving robust OOD generalization is the framework of neural algorithmic alignment, which incorporates ideas from classical algorithms by designing neural architectures that resemble specific algorithmic paradigms (e.g., dynamic programming). The hope is that trained models of this form have superior OOD capabilities, in much the same way that classical algorithms work for all instances. We employ sparsity regularization as a tool for analyzing the role of algorithmic alignment in achieving OOD generalization, focusing on GNNs applied to the canonical shortest path problem. We prove that GNNs trained to minimize a sparsity-regularized loss over a small set of shortest path instances are guaranteed to extrapolate to arbitrary shortest-path problems, including instances of any size. In fact, if a GNN minimizes this loss within an error of $\epsilon$, it computes shortest path distances up to an error of $O(\epsilon)$ on all instances. Our empirical results support our theory by showing that NNs trained by gradient descent are able to minimize this loss and extrapolate in practice.
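To make the setup concrete, below is a minimal sketch (not the authors' implementation) of how a Bellman-Ford-aligned GNN layer and a sparsity-regularized training loss might look in PyTorch. All class, function, and parameter names here (`MinAggregationLayer`, `sparsity_regularized_loss`, `lam`) are illustrative assumptions, not names from the paper.

```python
import torch
import torch.nn as nn

class MinAggregationLayer(nn.Module):
    """One message-passing round mimicking a Bellman-Ford relaxation:
    each node keeps the minimum over its current estimate and incoming messages."""
    def __init__(self):
        super().__init__()
        # Learned message combining the sender's distance estimate and the edge weight.
        self.msg = nn.Linear(2, 1)

    def forward(self, dist, edge_index, edge_weight):
        src, dst = edge_index                           # edges u -> v as two index tensors
        inputs = torch.stack([dist[src], edge_weight], dim=-1)
        messages = self.msg(inputs).squeeze(-1)         # one scalar message per edge
        # Min-aggregate messages into destination nodes; include_self keeps dist[v] in the min.
        return dist.scatter_reduce(0, dst, messages, reduce="amin", include_self=True)

def sparsity_regularized_loss(model, pred, target, lam=1e-2):
    """Supervised error on shortest-path distances plus an L1 penalty
    encouraging sparse, algorithm-like weights (illustrative choice of regularizer)."""
    mse = ((pred - target) ** 2).mean()
    l1 = sum(p.abs().sum() for p in model.parameters())
    return mse + lam * l1

# Toy forward pass (untrained, so outputs are not yet shortest paths):
# path graph 0 -> 1 -> 2 with unit weights, source node 0.
layer = MinAggregationLayer()
dist = torch.tensor([0.0, 1e6, 1e6])                   # large value stands in for "infinity"
edge_index = torch.tensor([[0, 1], [1, 2]])            # edges 0->1 and 1->2
edge_weight = torch.tensor([1.0, 1.0])
for _ in range(2):                                     # roughly diameter-many rounds
    dist = layer(dist, edge_index, edge_weight)
```

Running such a layer for a number of rounds comparable to the graph diameter mirrors the Bellman-Ford update schedule, and the L1 term is one concrete way to instantiate the sparsity-regularized loss described in the abstract.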
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 20696