ON RECOVERABILITY OF GRAPH NEURAL NETWORK REPRESENTATIONS

Published: 25 Mar 2022 (Last Modified: 29 Sep 2024) · GTRL 2022 Poster
Keywords: Graph Neural Networks
TL;DR: We propose the notion of recoverability, which is tightly related to information aggregation in GNNs, and based on this concept, develop a method for GNN embedding analysis.
Abstract: Despite their growing popularity, graph neural networks (GNNs) still have multiple unsolved problems, including finding more expressive aggregation methods, propagating information to distant nodes, and training on large-scale graphs. Understanding and solving such problems requires developing analytic tools and techniques. In this work, we propose the notion of \textit{recoverability}, which is tightly related to information aggregation in GNNs, and based on this concept, develop a method for GNN embedding analysis. Through extensive experimental results on various datasets and different GNN architectures, we demonstrate that estimated recoverability correlates with aggregation method expressivity and graph sparsification quality. The code to reproduce our experiments is available at \url{https://github.com/Anonymous1252022/Recoverability}.
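To make "information aggregation in GNNs", the process the abstract says recoverability analyzes, concrete, here is a minimal sketch of one round of mean-neighborhood aggregation. This is an illustrative example only, not the paper's method; the function name and the tiny graph are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a single round of
# mean-neighborhood aggregation, the basic information-mixing step in a
# GNN layer. All names here are hypothetical.
def mean_aggregate(adj: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Average each node's neighbor features, including a self-loop."""
    adj_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)    # per-node degree
    return (adj_hat @ features) / deg           # row-normalized aggregation

# Tiny 3-node path graph: 0 - 1 - 2, with scalar node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1.0], [2.0], [3.0]])
print(mean_aggregate(adj, x))  # node 1 averages information from both neighbors
```

Repeated application of such a step spreads information to more distant nodes, which is exactly the regime where the propagation problems the abstract mentions arise.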
Community Implementations: 2 code implementations listed on CatalyzeX (https://www.catalyzex.com/paper/on-recoverability-of-graph-neural-network/code)