Keywords: Graph Neural Network, Acceleration, Compressed Sensing, Graph Fourier
Abstract: Training Graph Neural Networks (GNNs) often relies on repeated, irregular, and expensive message-passing operations over all $N$ nodes, leading to high computational overhead. To alleviate this inefficiency, we revisit GNN training from a spectral perspective. In many real-world graphs, node features and embeddings admit sparse representations in the Graph Fourier domain. This inherent spectral sparsity aligns well with the principles of Compressed Sensing, which posits that signals sparse in some transform domain can be accurately reconstructed from a significantly reduced number of measurements. This observation motivates the design of more efficient GNNs that operate predominantly in a compressed spectral subspace. We therefore propose You Only Spectralize Once (YOSO), a GNN training scheme that performs a single Graph Fourier Transform to project features onto a learnable orthonormal Fourier basis, retaining only $M$ spectral coefficients ($M \ll N$). The entire GNN computation is then carried out in the reduced spectral domain, and full-graph embeddings are recovered only at the output layer by solving a bounded $\ell_{2,1}$-regularized optimization problem. Theoretically, drawing on Compressed Sensing theory, we prove stable recovery throughout training by showing that the projection onto our learnable Fourier basis, acting as the measurement process, satisfies the Restricted Isometry Property when $M=\mathcal{O}(k \log N)$ for $k$-row-sparse spectra. Empirically, YOSO achieves an average 74\% reduction in training time across five benchmark datasets compared to state-of-the-art methods, while maintaining competitive accuracy.
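The compress-then-recover pipeline described in the abstract can be illustrated with a minimal NumPy sketch (an illustration under stated assumptions, not the authors' implementation): a random orthonormal-row matrix Phi stands in for the learnable Fourier-basis projection acting as the measurement process, and the $\ell_{2,1}$-regularized recovery is solved with proximal gradient descent (ISTA). All names and hyperparameters here (Phi, lam, n_iters, the $4k\log N$ measurement count) are illustrative assumptions.

```python
# Sketch: compress a k-row-sparse spectrum with M = O(k log N) measurements,
# then recover it via l_{2,1}-regularized least squares solved by ISTA.
# Hypothetical stand-in for YOSO's learned basis; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 512, 8, 10             # nodes, embedding dim, row-sparsity
M = int(4 * k * np.log(N))       # measurement count in the M = O(k log N) regime

# Orthonormal-row measurement matrix standing in for the learned Fourier-basis
# projection (such random projections satisfy RIP with high probability).
Phi = np.linalg.qr(rng.standard_normal((N, M)))[0].T   # shape (M, N)

# Ground-truth k-row-sparse spectrum and its compressed measurements.
Z_true = np.zeros((N, d))
support = rng.choice(N, size=k, replace=False)
Z_true[support] = rng.standard_normal((k, d))
Y = Phi @ Z_true                                       # shape (M, d), M << N

def row_soft_threshold(Z, tau):
    """Proximal operator of tau * ||Z||_{2,1}: shrink each row's l2 norm."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12)) * Z

def recover(Y, Phi, lam=1e-3, n_iters=500):
    """ISTA for min_Z 0.5 * ||Y - Phi Z||_F^2 + lam * ||Z||_{2,1}."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2           # 1/L, L = ||Phi||_2^2
    Z = np.zeros((Phi.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        grad = Phi.T @ (Phi @ Z - Y)
        Z = row_soft_threshold(Z - step * grad, step * lam)
    return Z

Z_hat = recover(Y, Phi)
# Small relative error expected when M = O(k log N) and RIP holds.
print("relative error:", np.linalg.norm(Z_hat - Z_true) / np.linalg.norm(Z_true))
```

In YOSO the basis is learned jointly with the network and the GNN layers operate on the $M \times d$ coefficient matrix; the fixed random Phi above only illustrates the measurement and recovery mechanics claimed in the abstract.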
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 20653