The Self-loop Paradox: Investigating the Impact of Self-Loops on Graph Neural Networks

Published: 18 Nov 2023, Last Modified: 27 Nov 2023. LoG 2023 Poster.
Keywords: GNNs, Message Passing, Self-loops, Node Classification, Graph Ensembles
TL;DR: Countering intuition, we show that the inclusion of self-loops in GNNs can decrease the information a node retains about itself.
Abstract: Many Graph Neural Networks (GNNs) add self-loops to a graph to include feature information about a node itself at each layer. However, if the GNN consists of more than one layer, this information can return to its origin via cycles in the graph topology. Intuition suggests that this “backflow” of information should be larger in graphs with self-loops compared to graphs without. In this work, we counter this intuition and show that for certain GNN architectures, the information a node gains from itself can be smaller in graphs with self-loops compared to the same graphs without. We adopt an analytical approach for the study of statistical graph ensembles with a given degree sequence and show that this phenomenon, which we call the *self-loop paradox*, can depend both on the number of GNN layers *k* and whether *k* is even or odd. We experimentally validate our theoretical findings in a synthetic node classification task and investigate the practical relevance of the paradox in 23 real-world graphs.
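The sketch below illustrates the quantity at stake under a simplifying assumption (a linearized, random-walk view of message passing; this is our illustration, not the paper's exact ensemble analysis): the diagonal of the k-th power of the row-normalized adjacency matrix measures how much of a node's own signal returns to it after k layers, and comparing A against A + I shows how self-loops change this backflow. The helper name `self_information` and the choice of a Barabási–Albert test graph are assumptions made for illustration only.

```python
# Illustrative sketch: compare the "backflow" of a node's own information
# after k message-passing steps, with and without self-loops, using the
# diagonal of the k-step row-normalized propagation matrix (D^{-1} A)^k.
import numpy as np
import networkx as nx


def self_information(adj: np.ndarray, k: int) -> np.ndarray:
    """Diagonal of the k-step row-stochastic propagation matrix (D^{-1} A)^k."""
    deg = adj.sum(axis=1, keepdims=True)
    transition = adj / deg                       # row-normalize: D^{-1} A
    return np.diag(np.linalg.matrix_power(transition, k))


# Scale-free test graph with a heterogeneous degree sequence (no isolated nodes).
graph = nx.barabasi_albert_graph(n=200, m=3, seed=0)
A = nx.to_numpy_array(graph)
A_loops = A + np.eye(A.shape[0])                 # same graph with self-loops added

for k in (2, 3, 4, 5):
    without = self_information(A, k).mean()
    with_loops = self_information(A_loops, k).mean()
    print(f"k={k}: mean self-information without self-loops {without:.4f}, "
          f"with self-loops {with_loops:.4f}")
```

Under this simplification, the comparison can flip depending on k and its parity, which is the behavior the paper formalizes for statistical graph ensembles with a given degree sequence.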
Submission Type: Extended abstract (max 4 main pages).
Software: https://github.com/M-Lampert/self-loop-paradox
Submission Number: 105