Keywords: graphs, GNNs, spectral, k-harmonic, expressivity, MPNNs
TL;DR: We explore the theoretical expressivity of graph neural networks enhanced by a class of spectral distances called the k-harmonic distances.
Abstract: Positional encodings from spectral graph theory---such as spectral distances like effective resistance---have been shown to enhance the performance of graph neural networks (GNNs). However, the theoretical expressive power of these spectral features is not fully understood. While certain spectral features are known to increase expressive power, it is unclear whether different spectral features are equally powerful. Moreover, while spectral distance measures are known to enhance the expressivity of transformer-based architectures, their implications for message passing neural networks (MPNNs) remain relatively underexplored. In this work, we focus on one such family of spectral features: the $k$-harmonic distances. We establish upper and lower bounds on the expressivity of MPNNs augmented with $k$-harmonic distances and show that a finite set of $k$-harmonic distances collectively subsumes all spectral features. We also show that not all values of $k$ are equally expressive: some are more expressive than others in certain settings. To corroborate this theory, we present several empirical results demonstrating the expressive power of the $k$-harmonic distances, including their potential for computational efficiency over transformers in some cases. Finally, we experiment with making $k$ a learnable parameter and find that different datasets have different optimal values of $k$.
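For concreteness, below is a minimal sketch of one plausible convention for the $k$-harmonic distance, $d_k(u,v)^2 = (e_u - e_v)^\top (L^\dagger)^k (e_u - e_v)$, where $L^\dagger$ is the Moore-Penrose pseudoinverse of the graph Laplacian; under this convention $k = 1$ recovers effective resistance. The function name and normalization here are illustrative assumptions, not the paper's exact definition.

```python
# A sketch under the assumed convention
#   d_k(u, v)^2 = (e_u - e_v)^T (L^+)^k (e_u - e_v),
# which recovers (the square root of) effective resistance at k = 1.
# The paper's exact normalization may differ.
import numpy as np
import networkx as nx

def k_harmonic_distance(G: nx.Graph, u: int, v: int, k: float) -> float:
    L = nx.laplacian_matrix(G).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian
    # L_pinv is symmetric PSD, so a (possibly fractional) matrix power
    # can be taken via its eigendecomposition.
    w, Q = np.linalg.eigh(L_pinv)
    Lk = Q @ np.diag(np.clip(w, 0.0, None) ** k) @ Q.T
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return float(np.sqrt(e @ Lk @ e))

# On a 3-edge path graph, effective resistance between the endpoints is 3,
# so k = 1 yields sqrt(3) under this convention.
G = nx.path_graph(4)
print(k_harmonic_distance(G, 0, 3, k=1))  # ~1.732
```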
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21226