Enhancing Spectral GNNs: From Topology and Perturbation Perspectives

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0
TL;DR: We propose a sheaf Laplacian, which has more distinct eigenvalues and preserves the graph's topological information, enhancing the expressive power of spectral GNNs.
Abstract: Spectral Graph Neural Networks process graph signals using the spectral properties of the normalized graph Laplacian matrix. However, the frequent occurrence of repeated eigenvalues limits the expressiveness of spectral GNNs. To address this, we propose a higher-dimensional sheaf Laplacian matrix, which not only encodes the graph's topological information but also increases the upper bound on the number of distinct eigenvalues. The sheaf Laplacian matrix is derived from carefully designed perturbations of the block form of the normalized graph Laplacian, yielding a perturbed sheaf Laplacian (PSL) matrix with more distinct eigenvalues. We provide a theoretical analysis of the expressiveness of spectral GNNs equipped with the PSL and establish perturbation bounds for the eigenvalues. Extensive experiments on benchmark datasets for node classification demonstrate that incorporating the perturbed sheaf Laplacian enhances the performance of spectral GNNs.
Lay Summary: Many graph neural networks (GNNs) process graph-structured data (or signals) by leveraging the spectral properties of a special matrix or operator known as the graph Laplacian. When the normalized graph Laplacian has repeated eigenvalues, a GNN's ability to process data in the spectral domain is weakened, limiting the model's expressiveness. Drawing on cellular sheaf theory, we use the sheaf Laplacian, which associates each node and edge with a small vector space, and apply carefully designed perturbations to its block structure to construct a higher-dimensional perturbed sheaf Laplacian (PSL) with a richer spectrum of eigenvalues. Our theoretical analysis and empirical experiments both demonstrate the effectiveness of PSL-based spectral GNNs.
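The eigenvalue-splitting idea can be illustrated with a small NumPy sketch. This is not the paper's PSL construction; it is a generic sheaf Laplacian with 2-dimensional vertex stalks whose restriction maps are small rotations (the specific graph, angles, and normalization below are illustrative assumptions). On a 4-cycle, the normalized graph Laplacian has the repeated spectrum {0, 1, 1, 2}; lifting it block-wise with identity restriction maps merely duplicates each eigenvalue, while perturbing the restriction maps splits the spectrum into more distinct values.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix, used here as an orthogonal restriction map."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def normalized_sheaf_laplacian(edges, n, maps, d=2):
    """Normalized sheaf Laplacian with d-dimensional vertex stalks.

    maps[e] = (F_u, F_v) are the restriction maps of edge e = (u, v).
    With orthogonal maps each diagonal block is deg(v) * I_d, so the
    normalization divides block (u, v) by sqrt(deg_u * deg_v).
    """
    deg = np.zeros(n)
    L = np.zeros((n * d, n * d))
    for (u, v), (Fu, Fv) in zip(edges, maps):
        deg[u] += 1
        deg[v] += 1
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu   # diagonal blocks
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv   # off-diagonal blocks
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    dinv = np.repeat(1.0 / np.sqrt(deg), d)        # block D^{-1/2}
    return dinv[:, None] * L * dinv[None, :]

def n_distinct(eigvals, tol=1e-6):
    """Count distinct eigenvalues up to a numerical tolerance."""
    vals = np.sort(eigvals)
    return 1 + int(np.sum(np.diff(vals) > tol))

# 4-cycle: the normalized graph Laplacian has eigenvalues {0, 1, 1, 2}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Trivial sheaf (identity restriction maps): the block matrix is just
# L (x) I_2, so the number of distinct eigenvalues does not increase.
ident = [(np.eye(2), np.eye(2))] * len(edges)
L0 = normalized_sheaf_laplacian(edges, n, ident)

# Perturbed sheaf: small edge-dependent rotations (illustrative angles).
angles = [0.10, 0.20, 0.30, 0.40]
pert = [(rot(a), np.eye(2)) for a in angles]
L1 = normalized_sheaf_laplacian(edges, n, pert)

print(n_distinct(np.linalg.eigvalsh(L0)))  # -> 3
print(n_distinct(np.linalg.eigvalsh(L1)))  # -> 4
```

The perturbed operator stays symmetric positive semidefinite by construction (it is built from restriction maps, like any sheaf Laplacian), so spectral filters remain well defined; only the eigenvalue multiplicities change.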
Primary Area: Deep Learning->Graph Neural Networks
Keywords: spectral GNNs, perturbation theory, expressiveness, sheaf Laplacian
Submission Number: 3709