Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Desk Rejected Submission · Readers: Everyone
Keywords: GNN bottleneck, graph rewiring, representation bottleneck, multi-order interactions
Abstract: Most graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions. Recent studies point out that different graph learning tasks require different ranges of interactions between nodes. In this work, we explore the capacity of GNNs to capture multi-order interactions between nodes, where the order represents the complexity of the context in which interactions take place. We study two standard graph construction methods, namely \emph{K-nearest neighbor} (KNN) graphs and \emph{fully-connected} (FC) graphs, and concentrate on scientific problems in 3D Euclidean space. We demonstrate that the inductive bias introduced by KNN graphs and FC graphs prevents GNNs from learning interactions of the most appropriate complexity. We find that this phenomenon is shared by several GNNs across diverse graph learning tasks, so we name it the \emph{representation bottleneck}. To overcome it, we propose a novel graph rewiring approach based on interaction strengths of various orders that dynamically adjusts the receptive field of each node. Extensive experiments on molecular property prediction and dynamical system forecasting demonstrate the superiority of our method over state-of-the-art graph rewiring baselines. This paper also provides a reasonable explanation of why subgraphs play a vital role in determining graph properties. The code is available at \url{https://github.com/smiles724/bottleneck}
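The abstract contrasts the two standard graph constructions for point clouds in 3D Euclidean space. As a rough illustration only (this sketch is not taken from the paper's codebase; the function names `knn_edges` and `fc_edges` are my own), KNN graphs connect each node to a fixed number of nearest neighbors, fixing a local receptive field, whereas FC graphs connect every pair of nodes:

```python
import numpy as np

def knn_edges(coords, k):
    """Directed edge list (i -> j) linking each node to its k nearest neighbors
    under Euclidean distance. coords: (n, 3) array of 3D positions."""
    n = coords.shape[0]
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, int(j)) for i in range(n) for j in nbrs[i]]

def fc_edges(n):
    """Fully-connected edge list over n nodes, without self-loops."""
    return [(i, j) for i in range(n) for j in range(n) if i != j]

coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [5.0, 5.0, 5.0]])
print(len(knn_edges(coords, k=1)))  # 4 edges: one neighbor per node
print(len(fc_edges(4)))             # 12 edges: all ordered pairs
```

Both constructions impose a fixed inductive bias on the receptive field; the paper's rewiring approach instead adjusts each node's receptive field dynamically based on interaction strengths of various orders.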
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning