Towards Better Propagation of Non-parametric GNNs

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Graph Neural Network, Scalability, Depth, Large Graphs
TL;DR: The first attempt to improve the propagation process of non-parametric scalable GNNs, and the SOTA solution on ogbn-papers100M.
Abstract: Recent works show great interest in designing Graph Neural Networks (GNNs) that scale to large graphs. While previous work focuses on designing advanced sampling techniques for existing GNNs, non-parametric GNNs, an orthogonal direction toward scalability, have recently attracted growing attention. For example, nearly all top solutions on the Open Graph Benchmark leaderboard are non-parametric GNNs. Unlike most GNNs, which interleave feature propagation and non-linear transformation in each layer, non-parametric GNNs execute the non-parametric propagation in advance and then feed the propagated features into simple and scalable models (e.g., Logistic Regression). Despite their high predictive performance and scalability, non-parametric GNNs still face two limitations. First, they suffer severe performance degradation as the propagation depth grows, because the propagated features become over-smoothed. Second, and more importantly, they consider only the graph structure and ignore feature influence during the non-parametric propagation, leading to sub-optimal propagated features. To address these limitations, we present non-parametric attention (NPA), a plug-and-play module compatible with non-parametric GNNs, to obtain GNNs that are simultaneously scalable and deep. Experimental results on six homophilic graphs and five heterophilic graphs demonstrate that NPA delivers high performance (large gains over existing non-parametric GNNs), deeper architectures (non-parametric GNNs remain effective at large model depth), and high scalability (support for large-scale graphs at low time cost). Notably, it achieves state-of-the-art performance on the large ogbn-papers100M dataset.
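The "precompute, then train a simple model" pipeline that the abstract attributes to non-parametric GNNs can be illustrated with a minimal sketch in the spirit of SGC-style propagation followed by logistic regression. This is not the proposed NPA module; the function names, normalization choice, and toy data below are illustrative assumptions.

```python
# Minimal sketch of a non-parametric GNN pipeline: parameter-free feature
# propagation precomputed offline, then a simple scalable classifier.
# NOT the paper's NPA module; names and toy data are illustrative.
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import LogisticRegression


def sym_norm_adj(adj: sp.spmatrix) -> sp.csr_matrix:
    """Symmetrically normalize A + I, i.e. D^{-1/2} (A + I) D^{-1/2}."""
    adj = (adj + sp.eye(adj.shape[0])).tocsr()
    deg = np.asarray(adj.sum(axis=1)).ravel()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0
    return (sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)).tocsr()


def precompute_propagation(adj: sp.spmatrix, X: np.ndarray, k: int) -> np.ndarray:
    """Apply k parameter-free propagation steps X <- A_hat @ X before training."""
    A_hat = sym_norm_adj(adj)
    for _ in range(k):
        X = A_hat @ X  # purely structural smoothing; no learnable weights
    return X


if __name__ == "__main__":
    # Toy graph: 4 nodes on a path, 3-dimensional features, binary labels.
    rows, cols = [0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]
    adj = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
    X = np.random.RandomState(0).randn(4, 3)
    y = np.array([0, 0, 1, 1])

    X_prop = precompute_propagation(adj, X, k=2)  # offline, scalable step
    clf = LogisticRegression().fit(X_prop, y)     # simple downstream model
    print(clf.predict(X_prop))
```

Because the propagation uses only the (normalized) adjacency matrix, it can be computed once and reused, which is what makes this family of methods scale; it is also why deep propagation over-smooths the features and ignores feature influence, the two limitations the abstract says NPA targets.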
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4636