Rethinking Higher-order Representation Learning with Graph Neural Networks

Published: 18 Nov 2023, Last Modified: 30 Nov 2023, LoG 2023 Poster
Keywords: graph neural networks, higher-order representation, expressiveness
TL;DR: Study the expressive power of higher-order graph representation learning methods
Abstract: In the field of graph machine learning, graph neural networks (GNNs) are promising models for learning graph representations and node representations. However, many GNNs perform poorly at learning higher-order representations, such as links, due to their limited expressive power. Zhang et al. summarize recent advances in link prediction and propose the labeling trick as a common framework for learning node set representations with GNNs. However, their theory is limited to employing an ideally expressive GNN as the backend and can only justify a limited class of link prediction models. In this paper, we take a further step and study the expressive power of various higher-order representation learning methods. Our analysis begins by showing the inherent symmetry between node labeling and higher-order GNNs, which directly justifies previous labeling-trick methods (SEAL, GraIL), other node labeling methods (ID-GNN, NBFNet), and higher-order GNNs through a unified framework. Then, we study the use of MPNNs for computing representations in these methods and derive upper bounds on their expressive power in these settings. After that, we provide a comprehensive analysis of how these previous methods surpass plain GNNs by showing their ability to capture path information. Finally, guided by the intuitions from this analysis, we propose an extremely simple method for link prediction tasks, which we believe can inform the design of more complicated and powerful models in the future.
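To make the labeling-trick idea in the abstract concrete, below is a minimal sketch of zero-one node labeling for link prediction: the two endpoints of a candidate link are marked before message passing, so the GNN computes link-conditioned node representations. This is not the paper's proposed method; the functions (`label_nodes`, `mpnn_layer`, `link_score`), the unweighted sum aggregation, and the adjacency-list format are all illustrative assumptions.

```python
# Hedged sketch of the zero-one labeling trick (SEAL-style) for link
# prediction. Names and the unlearned sum-aggregation MPNN are
# illustrative assumptions, not the paper's actual model.

def label_nodes(num_nodes, target):
    """Zero-one labeling: mark the two endpoints of the target link."""
    u, v = target
    return [1.0 if i in (u, v) else 0.0 for i in range(num_nodes)]

def mpnn_layer(adj, h):
    """One message-passing step: each node adds the sum of its
    neighbors' features to its own (a GCN-like update, no weights)."""
    return [h[i] + sum(h[j] for j in adj[i]) for i in range(len(adj))]

def link_score(adj, features, target, num_layers=2):
    """Score a candidate link by running the MPNN on the labeled graph
    and combining the two endpoint representations symmetrically."""
    labels = label_nodes(len(adj), target)
    # A real model would concatenate label and feature vectors and
    # apply learned weights; summing scalars keeps the sketch minimal.
    h = [f + l for f, l in zip(features, labels)]
    for _ in range(num_layers):
        h = mpnn_layer(adj, h)
    u, v = target
    return h[u] * h[v]

# Usage on a 4-node path graph 0-1-2-3: the labeling makes the scores
# depend on the target pair, which a plain node-wise GNN cannot do
# here since all nodes share identical input features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
features = [1.0, 1.0, 1.0, 1.0]
print(link_score(adj, features, (0, 3)))  # 49.0
print(link_score(adj, features, (1, 2)))  # 169.0
```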
Supplementary Materials: zip
Submission Type: Full paper proceedings track submission (max 9 main pages).
Poster: png
Poster Preview: png
Submission Number: 106