A Graph Laplacian Eigenvector-based Pre-training Method for Graph Neural Networks

Published: 13 Nov 2025, Last Modified: 14 Nov 2025, TAG-DS 2025 Flash Talk, License: CC BY 4.0
Track: Extended Abstract (non-archival, 4 pages)
Keywords: Graph foundation models, graph neural networks, self-supervised training, pre-training, spectral graph theory, deep learning
TL;DR: We propose LELM, a novel framework for pre-training Graph Neural Networks (GNNs) by inductively learning Laplacian eigenvectors.
Abstract: The development of self-supervised graph pre-training methods is a crucial ingredient in recent efforts to design robust graph foundation models (GFMs). Structure-based pre-training methods remain under-explored, yet they are essential for downstream applications that rely on the underlying graph structure. In addition, pre-training traditional message-passing GNNs to capture global and regional structure is often challenging due to the risk of oversmoothing as network depth increases. We address these gaps by proposing the Laplacian Eigenvector Learning Module (LELM), a novel pre-training module for graph neural networks (GNNs) based on predicting the low-frequency eigenvectors of the graph Laplacian. Moreover, LELM introduces a novel architecture that overcomes oversmoothing, allowing the GNN model to learn long-range interdependencies. Empirically, we show that models pre-trained via our framework outperform baseline models on downstream molecular property prediction tasks.
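As a rough illustration of the pre-training targets described in the abstract, the sketch below computes the k lowest-frequency eigenvectors of a graph Laplacian with SciPy. This is a minimal sketch under stated assumptions, not the paper's LELM implementation: the use of the symmetric normalized Laplacian L_sym = I - D^{-1/2} A D^{-1/2}, the function name `low_frequency_eigenvectors`, and the choice of k are illustrative.

```python
# Illustrative sketch (not the paper's code): per-node regression targets for
# eigenvector-based pre-training, taken as the k eigenvectors of the symmetric
# normalized graph Laplacian with the smallest eigenvalues (low-frequency modes).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def low_frequency_eigenvectors(adj: sp.spmatrix, k: int = 8) -> np.ndarray:
    """Return an (n, k) array whose columns are the k lowest-frequency
    eigenvectors of L_sym = I - D^{-1/2} A D^{-1/2}."""
    adj = sp.csr_matrix(adj, dtype=np.float64)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    L_sym = sp.eye(adj.shape[0]) - D_inv_sqrt @ adj @ D_inv_sqrt
    # Smallest-magnitude eigenvalues of the PSD Laplacian are the smooth modes.
    eigvals, eigvecs = eigsh(L_sym, k=k, which="SM")
    order = np.argsort(eigvals)
    return eigvecs[:, order]

if __name__ == "__main__":
    # Example: a 4-node path graph, symmetrized adjacency.
    edges = [(0, 1), (1, 2), (2, 3)]
    n = 4
    rows = [u for u, v in edges] + [v for u, v in edges]
    cols = [v for u, v in edges] + [u for u, v in edges]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    targets = low_frequency_eigenvectors(A, k=2)
    print(targets.shape)  # (4, 2): two low-frequency targets per node
```

Note that eigenvectors are defined only up to sign (and rotation within eigenspaces of repeated eigenvalues), so any learning objective built on such targets would need to account for that ambiguity; the sketch above only produces the raw spectral targets.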
Supplementary Material: zip
Submission Number: 35