TL;DR: The predictions of a \textbf{$k$-layer} MPNN can be approximated by a single-layer GNN that aggregates over \textbf{$k$-hop} neighborhoods.
Abstract: While Graph Neural Networks (GNNs) have achieved remarkable success, their design largely relies on empirical intuition rather than theoretical understanding. In this paper, we present a comprehensive analysis of GNN behavior through three fundamental aspects:
(1) we establish that \textbf{$k$-layer} Message Passing Neural Networks efficiently aggregate \textbf{$k$-hop} neighborhood information through iterative computation,
(2) we analyze how different loop structures influence neighborhood computation, and (3) we examine model behavior on structure-feature hybrid tasks and on structure-only tasks.
For deeper GNNs, we demonstrate that gradient-related issues, rather than over-smoothing alone, can significantly degrade performance on sparse graphs. We also analyze how different normalization schemes affect model performance and how GNNs make predictions when node features are uniform, providing a theoretical framework that bridges the gap between empirical success and theoretical understanding.
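To make claim (1) concrete, below is a minimal sketch (not the paper's code) under the simplifying assumption of linear propagation: with layers of the form $H^{(l+1)} = \hat{A} H^{(l)} W_l$ and no nonlinearities, stacking $k$ message-passing layers collapses into a single layer that multiplies by $\hat{A}^k$, i.e. a $k$-hop aggregation. The graph, features, and weights are made-up toy data.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
n, f = 6, 4                                 # toy graph: 6 nodes, 4 features
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1); A = A + A.T              # random undirected adjacency
X = rng.normal(size=(n, f))                 # node features
Ws = [rng.normal(size=(f, f)) for _ in range(3)]  # k = 3 layer weights

A_hat = normalized_adjacency(A)

# k iterative (linear) message-passing layers: H <- A_hat H W_l
H = X
for W in Ws:
    H = A_hat @ H @ W

# single "k-hop" layer: one multiplication by A_hat^k with the collapsed weights
H_single = np.linalg.matrix_power(A_hat, 3) @ X @ (Ws[0] @ Ws[1] @ Ws[2])

print(np.allclose(H, H_single))  # True: iteration equals one k-hop aggregation
```

With nonlinearities between layers the two computations are no longer exactly equal, which is where the approximation argument in the paper comes in.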
Primary Area: Deep Learning->Graph Neural Networks
Keywords: MPNN, GNN, Matrix Multiplication
Submission Number: 640