The GNN as a Low-Pass Filter: A Spectral Perspective on Achieving Stability in Neural PDE Solvers

Published: 23 Sept 2025, Last Modified: 27 Oct 2025 · NPGML Poster · CC BY 4.0
Keywords: Graph Neural Networks, Implicit Regularization, Spectral Graph Theory, Neural PDE Solvers, Hamilton-Jacobi-Bellman Equation, Stability Analysis, Low-Pass Filter, Optimal Control
TL;DR: This paper demonstrates that a GNN's implicit bias as a spectral low-pass filter provides the necessary regularization to stably solve complex, non-smooth PDEs where unstructured networks fail.
Abstract: The choice of architecture in Graph Machine Learning (GML) presents a fundamental trade-off between the expressive power of universal approximators and the implicit regularization conferred by structured models like Graph Neural Networks (GNNs). This paper provides a principled framework for navigating this trade-off, using the challenging scientific domain of solving high-dimensional Hamilton-Jacobi-Bellman (HJB) partial differential equations as a testbed. Through a series of controlled experiments, we demonstrate that while flexible, unstructured networks excel on problems with smooth, globally structured solutions, they fail catastrophically on problems with complex, non-smooth features, precisely where structured GNNs remain stable. We connect this success to the GNN's established properties as a spectral low-pass filter, showing how this bias provides the implicit Lipschitz regularization needed to learn stable, generalizable solutions and prevents the numerical instabilities that plague unconstrained models. Our findings culminate in a framework that connects the mathematical properties of a problem's solution space to the optimal choice of GML architecture, offering a new perspective on architectural bias as a powerful regularization tool.
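The low-pass-filter behavior the abstract appeals to can be checked numerically. The sketch below (not the paper's code; graph size, edge probability, and step count k are arbitrary illustrative choices) builds a random graph, applies the standard GCN propagation matrix of Kipf & Welling a few times, and measures signal energy per eigenmode of the normalized Laplacian: high-frequency energy decays much faster than low-frequency energy, which is the implicit smoothing the paper credits for stability.

```python
import numpy as np

# Illustrative sketch: the GCN propagation matrix
# S = D̃^{-1/2} (A + I) D̃^{-1/2} acts as a low-pass filter
# with respect to the normalized graph Laplacian.
rng = np.random.default_rng(0)

# Random undirected graph on n nodes (Erdős–Rényi style).
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}; its eigenvalues
# lie in [0, 2] and index graph "frequencies" (small = smooth).
d = A.sum(1)
d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
lam, U = np.linalg.eigh(L)

# GCN-style propagation with self-loops (Kipf & Welling, 2017).
A_tilde = A + np.eye(n)
dt_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
S = dt_inv_sqrt[:, None] * A_tilde * dt_inv_sqrt[None, :]

# Spectral energy of a random signal before/after k propagation steps.
x = rng.standard_normal(n)
k = 4
y = np.linalg.matrix_power(S, k) @ x
coef_x = np.abs(U.T @ x)  # energy per Laplacian eigenmode (input)
coef_y = np.abs(U.T @ y)  # energy per eigenmode (after propagation)

lo, hi = lam < 1.0, lam >= 1.0
print("low-freq energy retained :", coef_y[lo].sum() / coef_x[lo].sum())
print("high-freq energy retained:", coef_y[hi].sum() / coef_x[hi].sum())
# High-frequency components are attenuated far more strongly, so
# repeated message passing implicitly regularizes toward smooth,
# Lipschitz-controlled functions on the graph.
```

Note that self-loops slightly shrink the spectrum, so S and L do not share an exact eigenbasis; the energy comparison above is an empirical demonstration of the filtering effect, not a closed-form statement.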
Submission Number: 15