Does Depth Really Hurt GNNs? Injective Message Passing Enables Deep Graph Learning

10 May 2025 (modified: 29 Oct 2025) · Submitted to NeurIPS 2025 · CC BY 4.0
Keywords: Graph Neural Networks
Abstract: Graph Neural Networks (GNNs) have shown great promise across domains, yet their performance often degrades with increased depth, a degradation commonly attributed to the oversmoothing phenomenon. This has led to a prevailing belief that depth inherently hurts GNNs. In this paper, we challenge this view and argue that the root cause is not depth itself but the lack of *injectivity* in standard message passing (MP) mechanisms, which fail to preserve structural information across layers. To address this, we propose a new message passing layer that is provably injective without requiring any training and guarantees that GNNs match the expressive power of the Weisfeiler-Lehman (WL) test by *design*. Furthermore, this injective MP enables a decoupled GNN architecture in which a shallow stack of injective MP layers ensures structural expressivity, followed by a deep stack of feature-learning layers for rich representation learning. We provide a theoretical analysis of the depth, width, and initialization of the MP layers required to ensure both expressivity and numerical stability. Empirically, we demonstrate that our architecture enables deeper GNNs without suffering from oversmoothing. Our findings suggest that the core limitation of GNNs is not depth but the lack of injectivity, and they offer a new perspective on building deeper and more expressive GNNs.
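
The abstract only outlines the decoupled design, so the PyTorch sketch below is a minimal illustration, not the paper's method: the `FrozenMPLayer` and `DecoupledGNN` names, the frozen random-weight sum aggregation standing in for the provably injective MP construction, and the layer counts are all hypothetical placeholders.

```python
# Minimal sketch (PyTorch) of the decoupled architecture described in the abstract:
# a shallow stack of frozen (training-free) message-passing layers for structural
# encoding, followed by a deep stack of node-wise feature-learning layers.
# NOTE: the frozen random-weight sum aggregation below is only a placeholder for
# the paper's actual injective MP layer, which the abstract does not specify.
import torch
import torch.nn as nn


class FrozenMPLayer(nn.Module):
    """Sum-aggregation message passing with frozen (untrained) weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_self = nn.Linear(dim, dim, bias=False)
        self.w_neigh = nn.Linear(dim, dim, bias=False)
        for p in self.parameters():          # no training required
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense [N, N] adjacency matrix; sum aggregation over neighbors
        return torch.relu(self.w_self(x) + self.w_neigh(adj @ x))


class DecoupledGNN(nn.Module):
    """Shallow MP stack for structure, deep node-wise MLP stack for features."""

    def __init__(self, dim: int, mp_layers: int = 3, mlp_layers: int = 32):
        super().__init__()
        self.mp = nn.ModuleList(FrozenMPLayer(dim) for _ in range(mp_layers))
        self.mlp = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(mlp_layers)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        for layer in self.mp:                # structural encoding (shallow)
            x = layer(x, adj)
        for layer in self.mlp:               # deep feature learning, residual connections
            x = x + layer(x)
        return x


if __name__ == "__main__":
    n, d = 6, 16
    adj = (torch.rand(n, n) < 0.3).float()
    adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
    out = DecoupledGNN(d)(torch.randn(n, d), adj)
    print(out.shape)  # torch.Size([6, 16])
```

The point of the split is that depth is added only in the node-wise feature-learning stack, which does no neighbor aggregation and therefore cannot oversmooth, while the few frozen MP layers carry the structural (WL-style) information.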
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 14380