Towards Stable, Globally Expressive Graph Representations with Laplacian Eigenvectors

26 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: graph neural networks, graph Laplacian eigenvectors
Abstract: Graph neural networks (GNNs) have achieved remarkable success in a variety of machine learning tasks over graph data. Existing GNNs usually rely on message passing, i.e., computing node representations by gathering information from the neighborhood, to build their underlying computational graphs. Such an approach has been shown to be fairly limited in expressive power, and often fails to capture global characteristics of graphs. To overcome this issue, a popular solution is to use Laplacian eigenvectors as additional node features, as they are known to contain global positional information of nodes and can serve as extra node identifiers that help GNNs separate structurally similar nodes. Since eigenvectors naturally come with symmetries (namely, an $O(p)$-group symmetry for every $p$ eigenvectors sharing an eigenvalue), properly handling these symmetries is crucial for the stability and generalizability of Laplacian-eigenvector-augmented GNNs. However, a naive $O(p)$-group-invariant encoder for each $p$-dimensional eigenspace may not retain the full expressivity of the Laplacian eigenvectors. Moreover, computing such invariants inevitably entails a hard split of Laplacian eigenvalues according to their numerical identity, which is highly unstable under small perturbations of the graph structure. In this paper, we propose a novel method exploiting Laplacian eigenvectors to generate *stable* and globally *expressive* graph representations. The main differences from previous works are that (i) our method uses **learnable** $O(p)$-invariant representations for each Laplacian eigenspace of dimension $p$, built upon powerful orthogonal-group-equivariant neural network layers already well studied in the literature, and that (ii) our method treats numerically close eigenvalues in a **smooth** fashion, ensuring better robustness against perturbations. Experiments on various graph learning benchmarks demonstrate the competitive performance of our method, especially its great potential to learn global properties of graphs.
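To make the symmetry discussion concrete, the following is a minimal NumPy sketch, not the authors' implementation: it computes the normalized-Laplacian eigendecomposition and, for each eigenspace spanned by columns $V_p$, extracts $\mathrm{diag}(V_p V_p^\top)$, which is invariant to any $O(p)$ change of basis since $(V_p Q)(V_p Q)^\top = V_p V_p^\top$. The function names and the tolerance `tol` are illustrative assumptions; the hard `tol`-based split of eigenvalues below is precisely the instability the paper identifies and replaces with a smooth, learnable treatment.

```python
import numpy as np

def laplacian_eigendecomposition(A):
    """Eigendecomposition of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}. Assumes a symmetric adjacency matrix
    with no isolated nodes."""
    deg = A.sum(axis=1)
    d_inv_sqrt = deg ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order, eigenvectors as columns
    return np.linalg.eigh(L)

def op_invariant_features(eigvals, eigvecs, tol=1e-6):
    """For each eigenspace with basis V_p (columns), diag(V_p V_p^T) is
    invariant under any O(p) basis change. NOTE: the hard split by `tol`
    is exactly the unstable step the paper replaces with a smooth scheme;
    it is used here only for illustration."""
    feats, start = [], 0
    for i in range(1, len(eigvals) + 1):
        if i == len(eigvals) or eigvals[i] - eigvals[start] > tol:
            V = eigvecs[:, start:i]                    # one eigenspace basis
            feats.append(np.einsum('ij,ij->i', V, V))  # diag(V V^T)
            start = i
    return np.stack(feats, axis=1)  # shape: (num_nodes, num_eigenspaces)

# Example: a 6-cycle, whose repeated eigenvalues yield two 2-dimensional
# eigenspaces (p = 2), so naive per-eigenvector features are basis-dependent.
A = np.zeros((6, 6))
for u in range(6):
    A[u, (u + 1) % 6] = A[(u + 1) % 6, u] = 1.0
vals, vecs = laplacian_eigendecomposition(A)
feats = op_invariant_features(vals, vecs)
```

In a Laplacian-eigenvector-augmented GNN, features like `feats` would be concatenated to the raw node features before message passing; the paper's approach differs in making the per-eigenspace invariants learnable and the eigenvalue grouping smooth rather than thresholded.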
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7169