Shortest Path Networks for Graph Property Prediction

Published: 24 Nov 2022, Last Modified: 12 Mar 2024. LoG 2022 Oral.
Keywords: Shortest Paths, Over-squashing, Graph Neural Networks, Message Passing
TL;DR: We propose the SP-MPNN framework to study message passing based on shortest paths in graph neural networks, exactly quantify its expressive power, and study its empirical performance on carefully designed synthetic baselines and common benchmarks.
Abstract: Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information, making it practically impossible to gather meaningful signals from distant nodes. To address this, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in the shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Our framework generalizes message passing neural networks, resulting in a class of more expressive models, including some recent state-of-the-art models. We verify the capacity of a basic model of this framework on dedicated synthetic experiments, and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
Type Of Submission: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/radoslav11/SP-MPNN