Keywords: Graph neural networks, deep learning, pre-training, graph foundation models, spectral graph theory
TL;DR: We propose LELM, a novel framework for pretraining Graph Neural Networks (GNNs) by inductively learning Laplacian eigenvectors.
Abstract: We propose the Laplacian Eigenvector Learning Module (LELM), a novel pre-training module for graph neural networks (GNNs). Traditional message-passing GNNs often struggle to capture global and regional graph structure because of the risk of over-smoothing as network depth increases. Since the low-frequency eigenvectors of the graph Laplacian encode global structural information, pre-training GNNs to predict these eigenvectors encourages the network to learn large-scale structural patterns across each graph. Empirically, we show that models pre-trained with our framework outperform baseline models on a variety of graph-structure-based tasks. Whereas most existing pre-training methods focus on domain-specific objectives such as feature reconstruction, our self-supervised pre-training framework is structure-based and highly flexible: we show that LELM can be used both as a standalone pre-training task and as a plug-in addition to a variety of existing pre-training pipelines.
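As a rough illustration of the pre-training targets described in the abstract, the sketch below computes the k lowest-frequency eigenvectors of the symmetric normalized graph Laplacian with SciPy, which could then serve as per-node regression targets. This is not the authors' LELM implementation; the function name `laplacian_eigvec_targets` and the choice of k are hypothetical.

```python
# Minimal sketch (not the authors' LELM code): low-frequency Laplacian
# eigenvectors as self-supervised pre-training targets for a GNN.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_eigvec_targets(adj: sp.csr_matrix, k: int = 8) -> np.ndarray:
    """Return the k eigenvectors of the symmetric normalized Laplacian
    with the smallest eigenvalues (one k-dim target per node)."""
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    # L_sym = I - D^{-1/2} A D^{-1/2}; its low eigenvalues encode global structure.
    lap = sp.identity(n) - D_inv_sqrt @ adj @ D_inv_sqrt
    # 'SA' = smallest algebraic eigenvalues; for large graphs a shift-invert
    # or dense solver may be more robust.
    eigvals, eigvecs = eigsh(lap, k=k, which="SA")
    # Note: eigenvectors are only defined up to sign (and up to a basis within
    # repeated eigenvalues), so a real pre-training loss must handle this ambiguity.
    return eigvecs  # shape (n, k)

# Usage example: random symmetric unweighted graph, 8-dim per-node targets.
adj = sp.random(100, 100, density=0.05, format="csr")
adj = ((adj + adj.T) > 0).astype(float)
targets = laplacian_eigvec_targets(adj, k=8)
print(targets.shape)  # (100, 8)
```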
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 15161