Expressivity-Preserving GNN Simulation

Published: 21 Sept 2023, Last Modified: 02 Nov 2023, NeurIPS 2023 poster
Keywords: Graph Neural Networks, GNNs, Graphs, Message Passing, Expressiveness, Graph Transformations, Message Passing Graph Neural Networks
TL;DR: We systematically investigate graph transformations that enable standard message passing to simulate state-of-the-art graph neural networks without loss of expressivity.
Abstract: We systematically investigate graph transformations that enable standard message passing to simulate state-of-the-art graph neural networks (GNNs) without loss of expressivity. Using these transformations, many state-of-the-art GNNs can be implemented with message passing operations from standard libraries, eliminating many sources of implementation error and allowing for better code optimization. We distinguish between weak and strong simulation: weak simulation achieves the same expressivity only after several message passing steps, while strong simulation achieves it after every message passing step. Our contribution yields a direct way to translate common operations of non-standard GNNs into graph transformations that allow for strong or weak simulation. Our empirical evaluation shows competitive predictive performance of message passing on transformed graphs across various molecular benchmark datasets, in several cases surpassing that of the original GNNs.
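To make the simulation idea concrete, below is a minimal sketch of running a standard message passing layer on a transformed graph, assuming PyTorch Geometric. The particular transformation shown (subdividing each edge into an auxiliary node that carries the edge feature, so plain node-based message passing can consume edge information) and the helper name `subdivide_edges` are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch: simulate an edge-feature-aware GNN with standard
# message passing by transforming the graph first. The transformation here
# (edge subdivision) is an example of the general recipe, not the paper's
# exact construction.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GINConv


def subdivide_edges(data: Data) -> Data:
    """Replace every edge (u, v) by a path u -- e_uv -- v, where the new
    node e_uv carries the edge feature as its node feature."""
    n, m = data.x.size(0), data.edge_index.size(1)
    d_node, d_edge = data.x.size(1), data.edge_attr.size(1)
    width = d_node + d_edge + 1  # +1: flag marking auxiliary nodes
    x = torch.zeros(n + m, width)
    x[:n, :d_node] = data.x                          # original nodes
    x[n:, d_node:d_node + d_edge] = data.edge_attr   # edge-nodes
    x[n:, -1] = 1.0                                  # auxiliary-node flag
    aux = torch.arange(n, n + m)
    src, dst = data.edge_index
    # Connect u -> e_uv and e_uv -> v for every original edge (u, v).
    edge_index = torch.stack([
        torch.cat([src, aux]),
        torch.cat([aux, dst]),
    ])
    return Data(x=x, edge_index=edge_index)


# Toy usage: a directed triangle with 2-dim node and 1-dim edge features.
data = Data(
    x=torch.randn(3, 2),
    edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]),
    edge_attr=torch.randn(3, 1),
)
t = subdivide_edges(data)
conv = GINConv(torch.nn.Sequential(
    torch.nn.Linear(t.x.size(1), 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 16),
))
out = conv(t.x, t.edge_index)  # off-the-shelf message passing, no custom GNN code
print(out.shape)               # torch.Size([6, 16]): 3 original + 3 auxiliary nodes
```

Note that the auxiliary-node flag lets the standard layer distinguish original nodes from edge-nodes, which is one simple way such a transformation can preserve the information a specialized GNN would exploit.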
Submission Number: 12895