Demystifying Oversmoothing in Attention-Based Graph Neural Networks

Published: 21 Sept 2023, Last Modified: 11 Jan 2024, NeurIPS 2023 spotlight
Keywords: graph neural networks, attention mechanisms, oversmoothing, dynamical systems, theory
TL;DR: We rigorously establish that, in attention-based graph neural networks, oversmoothing occurs at an exponential rate as model depth increases.
Abstract: Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations. While previous work has established that Graph Convolutional Networks (GCNs) exponentially lose expressive power, it remains controversial whether the graph attention mechanism can mitigate oversmoothing. In this work, we provide a definitive answer to this question through a rigorous mathematical analysis, viewing attention-based GNNs as nonlinear time-varying dynamical systems and incorporating tools and techniques from the theory of products of inhomogeneous matrices and the joint spectral radius. We establish that, contrary to popular belief, the graph attention mechanism cannot prevent oversmoothing and loses expressive power exponentially. The proposed framework extends the existing results on oversmoothing for symmetric GCNs to a significantly broader class of GNN models, including random walk GCNs, Graph Attention Networks (GATs), and (graph) transformers. In particular, our analysis accounts for asymmetric, state-dependent, and time-varying aggregation operators, as well as a wide range of common nonlinear activation functions, such as ReLU, LeakyReLU, GELU, and SiLU.
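The sketch below is not part of the paper's materials; it is a minimal toy illustration of the phenomenon the abstract describes. It stacks attention-style layers (a row-stochastic, state-dependent aggregation followed by ReLU) on a small random graph and tracks how far the node features are from a common consensus value as depth grows. The graph size, the dot-product attention scoring, and the smoothness measure are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): repeatedly apply an
# attention-style aggregation + ReLU and watch node features collapse
# toward a common value as depth increases.

rng = np.random.default_rng(0)

n, d, depth = 12, 8, 64             # nodes, feature dim, number of layers
A = rng.random((n, n)) < 0.3        # random adjacency
A = np.logical_or(A, A.T)           # make it undirected
np.fill_diagonal(A, True)           # add self-loops

X = rng.normal(size=(n, d))         # initial node features

def attention_layer(X, A):
    """One attention-style layer: masked softmax over neighbors, then ReLU."""
    scores = X @ X.T                            # dot-product attention logits
    scores = np.where(A, scores, -np.inf)       # restrict to graph neighbors
    scores -= scores.max(axis=1, keepdims=True) # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)           # row-stochastic aggregation operator
    return np.maximum(P @ X, 0.0)               # ReLU nonlinearity

def distance_from_consensus(X):
    """Norm of deviation from the feature mean; 0 means fully oversmoothed."""
    return np.linalg.norm(X - X.mean(axis=0, keepdims=True))

for layer in range(1, depth + 1):
    X = attention_layer(X, A)
    if layer in (1, 4, 16, 64):
        print(f"layer {layer:3d}: distance from consensus = {distance_from_consensus(X):.3e}")
```

On a connected toy graph the printed distance shrinks rapidly with depth, which is the empirical signature of the exponential loss of expressive power that the paper proves for this class of models.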
Supplementary Material: pdf
Submission Number: 10034