Towards Better Generalization with Flexible Representation of Multi-Module Graph Neural Networks

Published: 25 Jul 2023, Last Modified: 25 Jul 2023. Accepted by TMLR.
Abstract: Graph neural networks (GNNs) have become compelling models designed to perform learning and inference on graph-structured data. However, little work has been done to understand the fundamental limitations of GNNs for scaling to larger graphs and generalizing to out-of-distribution (OOD) inputs. In this paper, we use a random graph generator to systematically investigate how the graph size and structural properties affect the predictive performance of GNNs. We present specific evidence that the average node degree is a key feature in determining whether GNNs can generalize to unseen graphs, and that the use of multiple node update functions can improve the generalization performance of GNNs when dealing with graphs of multimodal degree distributions. Accordingly, we propose a multi-module GNN framework that allows the network to adapt flexibly to new graphs by generalizing a single canonical nonlinear transformation over aggregated inputs. Our results show that multi-module GNNs improve OOD generalization on a variety of inference tasks across diverse structural features.
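To make the idea of multiple node update functions concrete, the sketch below shows one way a multi-module message-passing layer could be written. It is a minimal illustration, not the paper's reference implementation: the class name, the choice of two update MLPs, and the degree-conditioned gate that mixes their outputs are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code): a message-passing layer with several
# node-update MLPs ("modules") whose outputs are mixed per node by a gate
# conditioned on that node's degree. All names and sizes are illustrative.
import torch
import torch.nn as nn


class MultiModuleGNNLayer(nn.Module):
    def __init__(self, dim: int, num_modules: int = 2):
        super().__init__()
        # One update function per module, applied to [node state || aggregated messages].
        self.updates = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_modules)
        )
        # Gate that weights the modules for each node from its (log-)degree.
        self.gate = nn.Linear(1, num_modules)

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, dim]; edge_index: [2, num_edges] with rows (src, dst).
        src, dst = edge_index
        # Sum-aggregate neighbor states into each destination node.
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, dst, torch.ones_like(dst, dtype=h.dtype)
        )
        # Per-node mixture weights over the update modules.
        weights = torch.softmax(self.gate(torch.log1p(deg).unsqueeze(-1)), dim=-1)
        inp = torch.cat([h, agg], dim=-1)
        outs = torch.stack([f(inp) for f in self.updates], dim=-1)  # [N, dim, M]
        return (outs * weights.unsqueeze(1)).sum(dim=-1)


# Toy usage: 4 nodes on a directed cycle, 8-dimensional node states.
layer = MultiModuleGNNLayer(dim=8, num_modules=2)
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
print(layer(h, edge_index).shape)  # torch.Size([4, 8])
```

The degree-based gate is only one plausible way to let different modules specialize to different parts of a multimodal degree distribution; the paper's actual mechanism for selecting or combining update functions may differ.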
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Artem_Babenko1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1108