Large-Scale Spectral Graph Neural Networks via Laplacian Sparsification

23 Sept 2023 (modified: 11 Feb 2024) Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: spectral graph neural networks, scalability, Laplacian sparsification
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose a Laplacian-sparsification-based method that improves the scalability of spectral graph neural networks.
Abstract: Graph Neural Networks (GNNs) play a pivotal role in graph-based tasks owing to their proficiency in representation learning. Among the various GNN methods, spectral GNNs employing polynomial filters have shown promising performance on both homophilous and heterophilous graph structures. However, their scalability is limited: each forward pass requires as many graph propagations as the degree of the polynomial filter. Scalable spectral GNNs instead decouple graph propagation from the linear layers, allowing the message-passing phase to be pre-computed and thereby scaling effectively to large graphs. This pre-computation, however, breaks end-to-end training, potentially degrading performance, and becomes impractical for high-dimensional input features. In response to these challenges, we propose a novel graph spectral sparsification method that approximates the propagation pattern of spectral GNNs. We prove that our proposed methods generate Laplacian sparsifiers for random-walk matrix polynomials, covering both static and learnable polynomial coefficients. By condensing multi-hop neighbor interactions into one-hop operations, our approach makes existing scalable techniques directly applicable. To empirically validate the effectiveness of our methods, we conduct an extensive experimental analysis on datasets spanning various graph scales and properties. The results show that our method yields superior results compared with the corresponding base models it approximates.
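To make the abstract's central idea concrete, here is a minimal illustrative sketch, not the paper's actual sparsifier (which comes with spectral approximation guarantees and handles learnable coefficients). It approximates a random-walk matrix polynomial sum_k w_k (D^{-1}A)^k by sampling k-step walk endpoints, collapsing multi-hop propagation into a single sparse one-hop matrix. The function name, sampling scheme, and sample budget are hypothetical, and the graph is assumed undirected with no isolated nodes.

```python
import numpy as np
import scipy.sparse as sp

def sample_rw_polynomial(adj, coeffs, num_samples=200_000, seed=0):
    """Monte-Carlo sketch of the random-walk matrix polynomial
    sum_k coeffs[k] * (D^{-1} A)^k as a sparse one-hop matrix.

    adj    : scipy.sparse CSR adjacency of an undirected graph with
             no isolated nodes (assumed for simplicity).
    coeffs : nonnegative polynomial coefficients; coeffs[0] weights
             the identity (0-hop) term.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    coeffs = np.asarray(coeffs, dtype=float)
    total = coeffs.sum()
    k_prob = coeffs / total          # pick a hop count ~ coefficient mass
    indptr, indices = adj.indptr, adj.indices

    rows, cols = [], []
    for _ in range(num_samples):
        k = rng.choice(len(coeffs), p=k_prob)
        u = rng.integers(n)          # uniform start node
        v = u
        for _ in range(k):           # k-step random walk u -> v
            nbrs = indices[indptr[v]:indptr[v + 1]]
            v = nbrs[rng.integers(len(nbrs))]
        rows.append(u)
        cols.append(v)

    # A sample lands on (u, v) with probability
    # (1/n) * sum_k (coeffs[k]/total) * (P^k)_{uv}, so rescaling the
    # counts by n * total / num_samples gives an unbiased estimate of
    # the full polynomial entry by entry.
    vals = np.full(num_samples, n * total / num_samples)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
```

Once such a sparse one-hop matrix is built, it can stand in for the repeated K-step propagation inside a polynomial-filter GNN, which is what lets standard one-hop scalability techniques apply; the paper's method additionally controls the approximation error spectrally, which this naive endpoint sampler does not.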
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7522