A Spectral Characterization of Generalization in GCN: Escaping the Curse of Dimensionality

ICLR 2026 Conference Submission 21250 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Generalization theory, Graph Neural Networks, Spectral theory
Abstract: Empirically, it is observed that Graph Convolutional Networks (GCNs) often generalize better than fully connected neural networks (FCNNs) on graph-structured data. While this observation is often attributed to the ability of GCNs to exploit knowledge about the underlying graph structure, a rigorous theoretical explanation remains lacking. In this work, we theoretically prove that one factor behind the improved generalization of GCNs is the spectral representation of their filters, i.e., the graph convolutional layers. Specifically, we derive generalization bounds that are independent of the number of parameters and instead scale nearly linearly with the number of graph nodes, offering a compelling explanation for their superior performance in over-parameterized regimes. Furthermore, in the limit of an infinite number of nodes, we prove that under certain regularity conditions on the spectrum, GCNs escape the curse of dimensionality and continue to generalize well. We demonstrate our conclusions through numerical experiments.
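For readers unfamiliar with the spectral view of graph convolutions that the abstract invokes, below is a minimal sketch of a polynomial spectral filter applied to node features. This is an illustrative example of the general technique, not the authors' code; the function name `spectral_gcn_layer` and the toy graph are assumptions for the example.

```python
import numpy as np

def spectral_gcn_layer(A, X, theta):
    """Apply a polynomial spectral filter g(L) = sum_k theta[k] * L^k
    to node features, via the eigendecomposition of the symmetric
    normalized graph Laplacian L.

    A: (n, n) adjacency matrix, X: (n, d) node features,
    theta: (K,) polynomial filter coefficients.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Spectral decomposition L = U diag(lam) U^T (L is symmetric PSD).
    lam, U = np.linalg.eigh(L)
    # The filter acts multiplicatively on the spectrum:
    # g(lam) = sum_k theta[k] * lam^k.
    g = sum(t * lam**k for k, t in enumerate(theta))
    # Filtered features: U g(Lambda) U^T X.
    return U @ (g[:, None] * (U.T @ X))

# Toy usage: a 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 2)
H = spectral_gcn_layer(A, X, theta=np.array([1.0, -0.5]))
print(H.shape)  # (4, 2)
```

Note that the layer's capacity is governed by the filter coefficients acting on the Laplacian spectrum rather than by a dense weight matrix over node features, which is consistent with the abstract's claim that the derived bounds depend on the number of graph nodes rather than the parameter count.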
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21250