Keywords: Graph neural networks, oversmoothing, decision and control
TL;DR: We propose a continuous-depth GNN inspired by behavioral interaction that is provably robust to oversmoothing, with well-behaved gradients and adaptability across homophilic and heterophilic datasets.
Abstract: While graph neural networks (GNNs) have allowed researchers to successfully apply neural networks to non-Euclidean domains, deep GNNs often exhibit lower predictive performance than their shallow counterparts. This phenomenon has been attributed in part to oversmoothing, the tendency of node representations to become increasingly similar with network depth. In this paper, we introduce an analogy between oversmoothing in GNNs and consensus (i.e., perfect agreement) in the multi-agent systems literature. We show that the message passing algorithms of several GNN models are equivalent to linear opinion dynamics in multi-agent systems, which have been shown to converge to consensus regardless of the initial state. This new perspective on oversmoothing motivates the use of nonlinear opinion dynamics as an inductive bias in GNN models. In addition to being more general than the linear opinion dynamics model, nonlinear opinion dynamics models can be designed to converge to dissensus for general inputs. Through extensive experiments, we show that our Behavior-Inspired Message Passing (BIMP) neural network resists oversmoothing beyond 100 time steps and consistently outperforms existing continuous-time GNNs, even when those are amended with oversmoothing mitigation techniques. We also show several desirable properties, including well-behaved gradients and adaptability to homophilic and heterophilic datasets.
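The consensus behavior the abstract attributes to linear opinion dynamics can be seen in a few lines. The sketch below (a minimal illustration, not the paper's BIMP model; the ring graph, step size, and step count are illustrative assumptions) integrates the linear opinion dynamics dx/dt = -Lx, where L is the graph Laplacian, and shows all node states collapsing to a single value regardless of the initial state — the analogue of oversmoothing:

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def simulate(x0, adj, dt=0.05, steps=2000):
    """Forward-Euler integration of the linear opinion dynamics dx/dt = -L x."""
    L = laplacian(adj)
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x

# Illustrative setup: a connected 5-node ring graph with random initial "opinions".
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

rng = np.random.default_rng(0)
x0 = rng.normal(size=n)
x_final = simulate(x0, adj)

# Consensus: the spread of node states collapses toward zero, while the
# mean opinion (a conserved quantity of these dynamics) is preserved.
print(np.ptp(x0), np.ptp(x_final))
```

For any connected graph, every eigenvalue of L except the one at zero (whose eigenvector is the all-ones consensus direction) is strictly positive, so all disagreement decays exponentially; the nonlinear opinion dynamics the paper advocates are designed precisely to escape this fate.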
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 14024