Keywords: GNNs, generalization, node prediction, link prediction
TL;DR: We provide a unified theoretical framework to study the generalization properties of modern node and link prediction architectures.
Abstract: Using message-passing graph neural networks (MPNNs) for node and link prediction is
crucial in various scientific and industrial domains, which has led to the development of
diverse MPNN architectures. Despite working well in practical settings, their ability to
generalize beyond the training set remains poorly understood. While some studies
have explored MPNNs’ generalization in graph-level prediction tasks, much less
attention has been given to node- and link-level predictions. Existing works often rely on
unrealistic i.i.d. assumptions, overlooking possible correlations between nodes or links,
and assume fixed aggregation functions and impractical loss functions while neglecting the
influence of graph structure. In this work, we introduce a unified framework to analyze
the generalization properties of MPNNs in inductive and transductive node and link
prediction settings, incorporating diverse architectural parameters and loss functions and
quantifying the influence of graph structure. Additionally, our proposed generalization
framework can be applied beyond graphs to any classification task under the inductive or
transductive setting. Our empirical study supports our theoretical insights, deepening our
understanding of MPNNs’ generalization capabilities in these tasks.
Submission Number: 40