Keywords: graph neural network, deep GNN, oversmoothing
TL;DR: Oversmoothing is an overestimated problem.
Abstract: Oversmoothing has been recognized as a main obstacle to building deep Graph Neural Networks (GNNs), limiting their performance. This paper argues that the influence of oversmoothing has been overstated and advocates further exploration of deep GNN architectures. Considering the three core operations of GNNs (aggregation, linear transformation, and non-linear activation), we show that prior studies have mistakenly conflated oversmoothing with the vanishing-gradient problem, which is caused by the transformation and activation steps rather than by aggregation. This finding challenges the prior belief that oversmoothing in GNNs is dominated by the GNN-specific aggregation operation. Furthermore, we demonstrate that classical solutions such as skip connections and normalization enable the successful stacking of deep GNN layers without performance degradation. Our results clarify misconceptions about oversmoothing and highlight the untapped potential of deep GNNs.
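The abstract's decomposition of a GNN layer into aggregation, linear transformation, and non-linear activation, plus the classical remedies it mentions (skip connections and normalization), can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual architecture: the function name `gnn_layer`, the normalized adjacency `A_hat`, and the per-node layer normalization are all illustrative choices.

```python
import numpy as np

def gnn_layer(A_hat, H, W, use_skip=True, use_norm=True):
    """One toy GNN layer built from the three core operations,
    optionally stabilized by a skip connection and normalization."""
    M = A_hat @ H                 # aggregation over the (normalized) graph
    Z = M @ W                     # linear transformation
    Z = np.maximum(Z, 0.0)        # non-linear activation (ReLU)
    if use_skip:
        Z = Z + H                 # skip connection: re-inject the layer input
    if use_norm:
        # per-node normalization keeps activations well-scaled in deep stacks
        mu = Z.mean(axis=1, keepdims=True)
        sigma = Z.std(axis=1, keepdims=True) + 1e-6
        Z = (Z - mu) / sigma
    return Z

# Toy example: a 5-node path graph with self-loops, row-normalized adjacency.
rng = np.random.default_rng(0)
n, d = 5, 4
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

# Stack 32 layers; with skip + norm, node features stay finite and distinct.
H = rng.standard_normal((n, d))
for _ in range(32):
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    H = gnn_layer(A_hat, H, W)
```

The stack remains numerically stable at depth 32, consistent with the paper's claim that these classical tricks suffice to train deep GNNs; whether the node representations also stay useful for a downstream task is exactly the empirical question the paper addresses.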
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 3335