Position: Spectral GNNs Rely Less on Graph Fourier Basis than Conceived

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track (poster) · CC BY 4.0
TL;DR: We challenge the current understanding of spectral graph learning.
Abstract: Spectral graph learning builds on two foundations: the graph Fourier basis as its theoretical cornerstone, and polynomial approximation as its route to practical implementation. While this framework has led to numerous successful designs, we argue that its effectiveness might stem from mechanisms different from its theoretical foundations. In this paper, we identify two fundamental issues that challenge our current understanding: (1) the graph Fourier basis $\mathbf{U}$ (the eigenvectors of the normalized graph Laplacian) faces too many open questions to truly serve its intended role, particularly in preserving the semantic properties of classical Fourier analysis; (2) the limitations preventing expressive filters are not merely practical constraints, but fundamental barriers that naturally protect stability and generalization. Importantly, the two issues are entangled: the second has obscured the first, since the natural avoidance of complex filters has kept us from fully confronting the questions about $\mathbf{U}$'s role as a Fourier basis. This observation leads to our position: the effectiveness of spectral GNNs relies less on the graph Fourier basis than originally conceived, or, in other words, **spectral GNNs might not be so spectral**. The position suggests at least two research directions: incorporating a more semantically meaningful graph dictionary beyond $\mathbf{U}$, and re-examining the theoretical role of the polynomial techniques.
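To make the framework concrete, here is a minimal sketch (assuming NumPy; the toy graph, filter coefficients, and signal are illustrative, not from the paper) of spectral filtering on a graph: filtering a signal in the eigenbasis $\mathbf{U}$ of the normalized Laplacian coincides with applying a polynomial of the Laplacian directly, which is why polynomial approximation can sidestep the eigendecomposition.

```python
import numpy as np

# Toy graph: a 6-node cycle (illustrative; any undirected graph works).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Graph Fourier basis U: eigenvectors of L (eigenvalues lam lie in [0, 2]).
lam, U = np.linalg.eigh(L)

# A polynomial filter g(lambda) = theta_0 + theta_1*lambda + theta_2*lambda^2
# (coefficients are arbitrary here, chosen only for illustration).
theta = np.array([1.0, -0.5, 0.1])

x = np.random.default_rng(0).normal(size=n)  # a random graph signal

# Spectral route: graph Fourier transform, scale by g(lam), transform back.
y_spectral = U @ (np.polyval(theta[::-1], lam) * (U.T @ x))

# Polynomial route: apply g(L) to x directly, with no eigendecomposition.
y_poly = sum(t * np.linalg.matrix_power(L, k) @ x for k, t in enumerate(theta))

assert np.allclose(y_spectral, y_poly)  # the two routes coincide
```

The equivalence $\mathbf{U} g(\mathbf{\Lambda}) \mathbf{U}^\top \mathbf{x} = g(\mathbf{L}) \mathbf{x}$ is exactly what lets spectral GNNs invoke $\mathbf{U}$ in theory while never computing it in practice.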
Lay Summary: Have you ever heard of the Fourier basis? Its striking ability to represent global oscillations at different frequencies is nothing short of impressive. On graphs, researchers have tried to harness a similar idea by defining a “graph Fourier basis”—essentially transplanting the classical Fourier concept onto network structures. They treat it as a powerful tool for analyzing signals on graphs. **There are plenty of reasons to be fascinated by the graph Fourier basis.** In graph neural networks, practitioners first encode the graph as a Laplacian matrix and then seek ways to exploit its eigenvectors, i.e., the graph Fourier basis. What’s clever is that they often combine this approach with polynomial approximation techniques, which bypass the costly full spectral decomposition yet still allow the graph Fourier basis to be used in practice. **But is the graph Fourier basis really as useful as everyone assumes?** When we visualized these basis vectors on a 3D mesh of a horse, we observed that many of them clearly no longer exhibited the hallmark global oscillations (we also performed other analyses). This led us to ask: **where does the belief come from that “the graph Fourier basis is semantically meaningful, just like the classical Fourier basis”?** We uncovered several factors: an unquestioning trust in mathematical analogies (even when those analogies jump too far), the influence of high-profile research directions, and a tendency to overgeneralize from familiar concepts. At the same time, we realized that polynomial approximation—the very vehicle for employing the graph Fourier basis—naturally prevents the “graph Fourier atoms” from being revealed: to ensure basic stability and generalization, we rarely use polynomials that are complex enough. Our position paper therefore takes a step back to reflect on how these technical developments have unfolded. It argues that we need to rethink our reliance on the graph Fourier basis and examine what we have actually achieved and why these networks work.
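As a small illustration of the kind of check described above (not the paper's horse-mesh experiment; the toy graph and the inverse-participation-ratio diagnostic are our own assumptions), one can compute the graph Fourier basis of an irregular graph and measure how globally each basis vector spreads:

```python
import numpy as np

# Toy graph: a 20-node path with 5 extra leaves attached to node 0
# (illustrative; the paper's visualization uses a 3D horse mesh instead).
n = 20
N = n + 5
A = np.zeros((N, N))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
for j in range(n, N):  # hub: node 0 connected to the 5 leaves
    A[0, j] = A[j, 0] = 1.0

# Symmetric normalized Laplacian and its eigenvectors (graph Fourier basis).
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(N) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
lam, U = np.linalg.eigh(L)

# Inverse participation ratio of each unit-norm eigenvector:
# ~1/N for a globally spread (Fourier-like) mode, ~1 for a mode
# concentrated on a single node.
ipr = (U**4).sum(axis=0)

k = np.argmax(ipr)
print(f"low-frequency mode:  eigenvalue {lam[1]:.3f}, IPR {ipr[1]:.3f}")
print(f"most localized mode: eigenvalue {lam[k]:.3f}, IPR {ipr[k]:.3f}")
print(f"baseline for a fully spread mode: {1 / N:.3f}")
```

On this toy graph, the modes supported on the hub's leaves come out sharply localized (IPR far above the 1/N baseline) rather than globally oscillating, echoing the observation that graph Fourier atoms need not behave like classical Fourier modes.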
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: Spectral Graph Learning; Critical Thinking; Graph Fourier Basis; Polynomial Methods
Submission Number: 266