GNN-VPA: A Variance-Preserving Aggregation Strategy for Graph Neural Networks

Published: 05 Mar 2024, Last Modified: 12 May 2024, PML4LRS Poster, License: CC BY 4.0
Keywords: graph neural networks, variance preserving aggregation, expressivity, signal propagation
TL;DR: Using signal propagation theory, we propose a variance-preserving aggregation function, which maintains the expressivity of GNNs while improving learning dynamics.
Abstract: Graph neural networks (GNNs), and especially message-passing neural networks, excel in a variety of domains such as physics, drug discovery, and molecular modeling. In low-resource settings, it is crucial that stochastic gradient descent optimizes the objective meaningfully from the start, rather than spending the initial iterations merely adjusting weights into value ranges suitable for efficiently reducing the loss. In accordance with signal propagation theory, we propose a variance-preserving aggregation function (VPA) for message aggregation and graph-level readout that achieves such favorable forward and backward dynamics. Moreover, VPA maintains the expressivity of GNNs with respect to their ability to discriminate non-isomorphic graphs. Experiments demonstrate that VPA leads to increased predictive performance for popular GNN architectures as well as improved learning dynamics. Our results could pave the way towards even more efficient GNNs by enabling normalizer-free or self-normalizing architectures.
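The abstract does not spell out the aggregation rule, but the signal propagation argument suggests the following intuition: summing n i.i.d. unit-variance messages yields variance n, while averaging shrinks it to 1/n; scaling the sum by 1/sqrt(n) keeps the variance near 1. The sketch below is a minimal Monte Carlo illustration of that reasoning, assuming (as a hypothetical simplification, not the paper's exact formulation) that VPA divides the summed neighbor messages by the square root of the neighbor count.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neighbors = 16      # number of incoming messages at a node (assumed for illustration)
n_trials = 100_000    # Monte Carlo repetitions
dim = 8               # message dimensionality

# i.i.d. unit-variance messages, the standard assumption in signal propagation analyses
messages = rng.standard_normal((n_trials, n_neighbors, dim))

sum_agg = messages.sum(axis=1)                          # variance grows ~ n
mean_agg = messages.mean(axis=1)                        # variance shrinks ~ 1/n
vpa_agg = messages.sum(axis=1) / np.sqrt(n_neighbors)   # variance stays ~ 1 (assumed VPA-style scaling)

print(f"sum  aggregation variance: {sum_agg.var():.3f}")   # ~ 16
print(f"mean aggregation variance: {mean_agg.var():.3f}")  # ~ 0.06
print(f"vpa  aggregation variance: {vpa_agg.var():.3f}")   # ~ 1
```

Under this assumption, the sqrt-scaled sum keeps activation magnitudes stable across layers regardless of node degree, which is the "favorable forward and backward dynamics" the abstract refers to; the exact aggregation and readout definitions are given in the paper itself.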
Submission Number: 30