ON RECOVERABILITY OF GRAPH NEURAL NETWORK REPRESENTATIONS

GTRL 2022 Poster (02 Mar 2022, edited 04 Apr 2022)
  • Keywords: Graph Neural Networks
  • TL;DR: We propose the notion of recoverability, which is tightly related to information aggregation in GNNs, and based on this concept, develop a method for GNN embedding analysis.
  • Abstract: Despite their growing popularity, graph neural networks (GNNs) still have multiple unsolved problems, including finding more expressive aggregation methods, propagating information to distant nodes, and training on large-scale graphs. Understanding and solving such problems requires developing analytic tools and techniques. In this work, we propose the notion of \textit{recoverability}, which is tightly related to information aggregation in GNNs, and based on this concept, develop a method for GNN embedding analysis. Through extensive experimental results on various datasets and different GNN architectures, we demonstrate that the estimated recoverability correlates with aggregation method expressivity and graph sparsification quality. The code to reproduce our experiments is available at \url{https://github.com/Anonymous1252022/Recoverability}.
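  • Note: The abstract centers on information aggregation in GNNs. As background only, the sketch below shows a generic mean-aggregation GNN layer written from scratch; it is not the paper's recoverability estimator, and all names in it are illustrative assumptions.

```python
# Minimal sketch of generic GNN neighborhood aggregation (assumption: mean aggregation),
# not the recoverability method from the paper.
import numpy as np

def mean_aggregate(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One mean-aggregation layer: H' = ReLU(D^{-1} (A + I) H W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                    # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)     # node degrees (including self-loop)
    h = (a_hat / deg) @ features @ weight      # average neighbor features, then linear map
    return np.maximum(h, 0.0)                  # ReLU nonlinearity

# Toy usage: a 3-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.random.randn(3, 2)
weight = np.random.randn(2, 2)
print(mean_aggregate(adj, features, weight))
```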