Machine Learning on Large-Scale Graphs: Graphon NNs and Learning by Transference

Published: 21 May 2023 · Last Modified: 14 Jul 2023 · SampTA 2023 Abstract · Readers: Everyone
Abstract: Graph neural networks (GNNs) are successful at learning representations from most types of network data but face limitations on large graphs, which, unlike time and image signals, lack a Euclidean structure in the limit. Yet, large graphs can often be identified as similar to one another in the sense that they share structural properties. Indeed, graphs can be grouped into families converging to a common graph limit -- the graphon. A graphon is a bounded symmetric kernel that can be interpreted both as a random graph model and as the limit object of a convergent sequence of graphs. Graphs sampled from a graphon almost surely share structural properties in the limit, which implies that graphons describe families of similar graphs. We can thus expect that processing data supported on graphs associated with the same graphon should yield similar results. In this work, I formalize this intuition by showing that the error made when transferring a GNN across two graphs in a graphon family is small when the graphs are sufficiently large, a property called transferability. I then extend this result to show that, in scenarios where a training graph large enough to achieve a small transferability error is itself too large to train on directly, the transferability property can be leveraged in model training through an algorithm that learns the GNN on a sequence of growing graphs. Under mild assumptions, the GNN learned by this algorithm converges, in the limit, to the solution of the empirical risk minimization problem on the graphon. This enables large-scale graph machine learning via transference: training GNNs on (sequences of) moderate-scale graphs and executing them on large-scale graphs.
Submission Type: Abstract
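To make the random-graph-model interpretation of the graphon concrete, here is a minimal sketch of sampling graphs from a graphon. The specific kernel W(x, y) = exp(-2|x - y|), the graph sizes, and all identifiers are illustrative choices, not taken from the abstract:

```python
import numpy as np

def W(x, y):
    # Hypothetical graphon: a bounded, symmetric kernel on [0, 1]^2.
    return np.exp(-2.0 * np.abs(x - y))

def sample_graph(n, rng):
    """Sample an n-node graph from the graphon W.

    Each node i gets a latent position x_i ~ Uniform[0, 1]; edge (i, j)
    is drawn independently with probability W(x_i, x_j).
    """
    x = np.sort(rng.uniform(size=n))            # sorting is cosmetic, not required
    probs = W(x[:, None], x[None, :])           # n x n edge-probability matrix
    draws = rng.uniform(size=(n, n)) < probs    # independent Bernoulli draws
    A = np.triu(draws, k=1)                     # upper triangle, no self-loops
    A = (A | A.T).astype(float)                 # symmetrize into an adjacency matrix
    return x, A

rng = np.random.default_rng(0)

# Graphs sampled from one graphon share structural properties: e.g., the
# edge density of both graphs is close to the graphon mean, here about 0.57.
for n in (200, 2000):
    x, A = sample_graph(n, rng)
    print(f"n={n:4d}: edge density = {A.mean():.3f}")
```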
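The transferability statement can be sketched in the same setting: run one and the same GNN on two graphs sampled from the graphon and check that a size-invariant readout of its output changes little once both graphs are large. The two-layer polynomial-filter architecture below, the normalization S = A/n, and the mean readout are common constructions in this literature but are assumptions here, not the paper's exact setup. The code continues the previous sketch (it reuses sample_graph):

```python
import numpy as np

def gnn_forward(A, feats, weights):
    """Two-layer convolutional GNN: each layer is a degree-K polynomial
    graph filter in the normalized shift S = A / n, followed by a ReLU.
    This scaling of the shift operator is what admits a graphon limit."""
    S = A / A.shape[0]
    z = feats
    for H in weights:                       # H has shape (K, f_in, f_out)
        out = np.zeros((z.shape[0], H.shape[2]))
        Sk_z = z
        for k in range(H.shape[0]):
            out += Sk_z @ H[k]              # k-th filter tap
            Sk_z = S @ Sk_z                 # next power of the shift
        z = np.maximum(out, 0.0)            # ReLU
    return z

rng2 = np.random.default_rng(1)
K, F = 2, 4
weights = [rng2.normal(size=(K, 1, F)), rng2.normal(size=(K, F, 1))]

def readout(n):
    """Run the same (untrained) GNN on a fresh n-node graph from W, with
    the graphon-induced input signal x_i = latent position, and return a
    size-invariant readout (the mean of the output)."""
    x, A = sample_graph(n, rng2)
    return gnn_forward(A, x[:, None], weights).mean()

# Transferring the same GNN across two graphs in the graphon family:
# the readout gap should shrink (stochastically) as the graphs grow.
for n in [100, 400, 1600]:
    print(f"n={n:4d} vs {2*n:4d}: gap = {abs(readout(n) - readout(2*n)):.4f}")
```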
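Finally, learning by transference can be caricatured as warm-started gradient descent on a growing graph sequence: at each stage a larger graph is sampled from the graphon and training continues from the current parameters. To keep the gradient explicit, a linear graph filter (the linear special case of a GNN) stands in for the full model; the target signal, step size, step counts, and graph sizes are all hypothetical. Again this reuses sample_graph from the first sketch:

```python
import numpy as np

def filter_features(A, x_sig, K=2):
    """Columns [x, Sx, ...]: the terms of a degree-K polynomial graph
    filter with the graphon-compatible normalized shift S = A / n."""
    S = A / A.shape[0]
    cols, z = [], x_sig.copy()
    for _ in range(K):
        cols.append(z)
        z = S @ z
    return np.stack(cols, axis=1)            # n x K feature matrix

def learn_by_transference(sizes, steps, lr, rng):
    """Sketch of training on a sequence of growing graphs: the filter taps
    h are warm-started from one stage to the next, so each larger graph
    refines, rather than restarts, the model."""
    h = np.zeros(2)
    for n in sizes:                           # growing graph sequence
        x, A = sample_graph(n, rng)
        Phi = filter_features(A, x)           # input signal: the latents
        y = np.sin(np.pi * x)                 # graphon-induced regression target
        for _ in range(steps):
            grad = Phi.T @ (Phi @ h - y) / n  # gradient of the mean-squared error
            h -= lr * grad
        # h should stabilize as n grows, mirroring convergence toward the
        # empirical risk minimizer on the graphon.
        print(f"n={n:5d}  h = {np.round(h, 3)}")
    return h

rng3 = np.random.default_rng(2)
h = learn_by_transference([100, 200, 400, 800, 1600], steps=300, lr=2.0, rng=rng3)
```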