On the Interplay between Graph Structure and Learning Algorithms in Graph Neural Networks

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
TL;DR: This paper analyzes the effect of graph structure on learning algorithms for graph neural networks.
Abstract: This paper studies the interplay between learning algorithms and graph structure for graph neural networks (GNNs). Existing theoretical studies on the learning dynamics of GNNs primarily focus on the convergence rates of learning algorithms under the interpolation regime (noise-free) and offer only a crude connection between these dynamics and the actual graph structure (e.g., maximum degree). This paper aims to bridge this gap by investigating the excess risk (generalization performance) of learning algorithms in GNNs within the generalization regime (with noise). Specifically, we extend conventional settings from the learning theory literature to the context of GNNs and examine how graph structure influences the performance of learning algorithms such as stochastic gradient descent (SGD) and Ridge regression. Our study makes several key contributions toward understanding the interplay between graph structure and learning in GNNs. First, we derive the excess risk profiles of SGD and Ridge regression in GNNs and connect these profiles to the graph structure through spectral graph theory. With this established framework, we further explore how different graph structures (regular vs. power-law) affect the performance of these algorithms through comparative analysis. Additionally, we extend our analysis to multi-layer linear GNNs, revealing an increasing non-isotropic effect on the excess risk profile and thereby offering new insights into the over-smoothing issue in GNNs from the perspective of learning algorithms. Our empirical results align with our theoretical predictions, collectively showcasing a coupling among graph structure, GNNs, and learning algorithms, and providing insights into GNN algorithm design and selection in practice.
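The sketch below is a minimal, illustrative rendering of the setting the abstract describes, not the paper's exact construction: a linear GNN whose features are graph-propagated inputs, fit by closed-form Ridge regression, where the spectrum of the propagated-feature covariance is the quantity that ties the excess risk to the graph structure. The synthetic data, the number of propagation steps `k`, and the ridge penalty `lam` are assumed choices made for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's exact method): linear GNN
# features obtained by k-step graph propagation, Ridge regression in closed form,
# and the feature-covariance spectrum that governs the excess risk profile.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def normalized_adjacency(G):
    """Symmetric-normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def linear_gnn_ridge(G, X, y, k=2, lam=1e-2):
    """Propagate features k hops (multi-layer linear GNN), then solve Ridge regression."""
    A_hat = normalized_adjacency(G)
    H = np.linalg.matrix_power(A_hat, k) @ X           # graph-propagated features
    n, d = H.shape
    w = np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)
    cov_spectrum = np.linalg.eigvalsh(H.T @ H / n)     # spectrum linked to the risk profile
    return w, cov_spectrum

n, d = 200, 16
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)              # noisy labels: generalization regime

# Compare a regular graph with a power-law (preferential-attachment) graph.
for name, G in [("regular", nx.random_regular_graph(4, n, seed=0)),
                ("power-law", nx.barabasi_albert_graph(n, 2, seed=0))]:
    _, spec = linear_gnn_ridge(G, X, y)
    print(f"{name}: largest eigenvalue {spec[-1]:.3f}, smallest {spec[0]:.3f}")
```

Increasing `k` in this sketch concentrates the propagated features on the leading spectral directions of the normalized adjacency, which is one way to visualize the growing non-isotropic effect and the over-smoothing behavior the abstract refers to.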
Lay Summary: Graph Neural Networks (GNNs) are widely used for analyzing data structured as graphs, such as social networks, molecular structures, or interconnected web pages. However, their performance heavily depends on the graph's structure and the chosen training methods. This paper explores how different network configurations (for example, evenly connected networks versus those dominated by a few highly connected nodes) influence the success of common training algorithms. Additionally, the authors investigate deeper GNNs, identifying conditions that explain why deeper networks often struggle, a challenge known as "over-smoothing." Understanding these factors helps in selecting or designing GNNs that perform better in practical applications.
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: learning theory, graph neural network, graph learning
Submission Number: 3305