Can strong structural encoding reduce the importance of Message Passing?

Published: 18 Jun 2023, Last Modified: 02 Jul 2023 · TAGML 2023 Poster
Keywords: graph neural networks, geometric deep learning, topology, structural encoding
TL;DR: This paper provides empirical evidence that the importance of message passing is limited when the model can construct strong structural encodings.
Abstract: The most prevalent class of neural networks operating on graphs are message-passing neural networks (MPNNs), in which the representation of a node is updated iteratively by aggregating information from its 1-hop neighborhood. Since this paradigm for computing node embeddings may prevent the model from learning coarse topological structures, the initial features are often augmented with structural information about the graph, typically in the form of Laplacian eigenvectors or random-walk transition probabilities. In this work, we explore the contribution of message passing when strong structural encodings are provided. We introduce a novel way of modeling the interaction between feature and structural information, based on their tensor product rather than the standard concatenation. The two choices of interaction are compared in common scenarios, in settings where the capacity of the message-passing layers is severely reduced, and ultimately when message passing is removed altogether. Our results indicate that the tensor-based encoding is always at least on par with the concatenation-based one, and that it makes the model much more robust when the message-passing layers are removed, incurring almost no drop in performance on some tasks. This suggests that the importance of message passing is limited when the model can construct strong structural encodings.
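The abstract's central design choice, combining node features with structural encodings via a tensor product instead of concatenation, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, tensor shapes, and the per-node outer-product-then-flatten formulation are assumptions made for illustration.

```python
import torch

def concat_encoding(x: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    # Standard approach: concatenate node features x of shape (n, d) with
    # structural encodings p of shape (n, k), e.g. Laplacian eigenvectors
    # or random-walk transition probabilities.
    return torch.cat([x, p], dim=-1)  # shape (n, d + k)

def tensor_product_encoding(x: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    # One plausible tensor-product interaction: the per-node outer product
    # of the feature and structural vectors, flattened to a single vector,
    # so every feature dimension interacts with every structural dimension.
    return torch.einsum("nd,nk->ndk", x, p).flatten(1)  # shape (n, d * k)

# Toy example: 5 nodes, 8 feature dimensions, 4 structural dimensions.
x = torch.randn(5, 8)
p = torch.randn(5, 4)
print(concat_encoding(x, p).shape)          # torch.Size([5, 12])
print(tensor_product_encoding(x, p).shape)  # torch.Size([5, 32])
```

Unlike concatenation, which leaves feature and structural channels independent until later layers mix them, the outer product exposes multiplicative feature-structure interactions directly in the encoding, which is consistent with the paper's finding that such encodings remain informative even without message passing.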
Supplementary Materials: zip
Type Of Submission: Proceedings Track (8 pages)
Submission Number: 26