On Reconstructability of Graph Neural Networks

19 Sept 2023 (modified: 11 Feb 2024). Submitted to ICLR 2024.
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph Neural Network, Reconstructability
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: This paper investigates the expressive power of GNNs through their ability to reconstruct the input graph.
Abstract: Recently, the expressive power of GNNs has been analyzed based on their ability to determine whether two given graphs are isomorphic using the Weisfeiler-Lehman (WL) test. However, previous analyses only establish the expressiveness of GNNs for graph-level tasks from a global perspective. In this paper, we analyze the expressive power of GNNs in terms of Graph Reconstructability, which examines whether the topological information of a graph can be recovered from a local (node-level) perspective. We answer this question by analyzing how the output node embeddings extracted from GNNs may retain the information needed to reconstruct the input graph structure. Moreover, we generalize GNNs in the form of the Graph Reconstructable Neural Network (GRNN) and explore Nearly Orthogonal Random Features (NORF) to retain graph reconstructability. Experimental results demonstrate that GRNN outperforms representative baselines in reconstructability and efficiency.
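The core idea in the abstract, recovering edges from node embeddings when input features are nearly orthogonal, can be sketched as follows. This is an illustrative toy, not the paper's GRNN or its exact NORF construction: the graph, feature dimension, and decoding threshold are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: a path 0-1-2-3, given as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n = A.shape[0]

# Nearly orthogonal random features: high-dimensional Gaussian vectors
# are close to orthonormal with high probability (illustrative stand-in
# for the NORF idea; pairwise dot products concentrate near 0).
d = 1024
X = rng.standard_normal((n, d)) / np.sqrt(d)

# One round of sum-aggregation message passing (a minimal GNN layer
# with no learned weights): h_i = sum of x_j over neighbors j of i.
H = A @ X

# Decode edges: h_i . x_j is close to A_ij, since x_j . x_j is close
# to 1 while the cross terms from other neighbors are close to 0.
S = H @ X.T
A_hat = (S > 0.5).astype(float)

print(np.array_equal(A_hat, A))
```

With nearly orthogonal inputs the inner products `S[i, j]` concentrate near 1 for edges and near 0 for non-edges, so a simple threshold recovers the adjacency matrix; with constant or highly correlated features, the same decoding would fail, which is the kind of gap the reconstructability analysis formalizes.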
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1790