Efficient Graph Representation Learning by Non-Local Information Exchange

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Non-local information exchange, Graph rewiring, Graph representation learning, Graph expressibility
TL;DR: We propose a novel graph rewiring method that plugs into various GNNs to enable non-local information exchange, improving graph expressibility and the node classification performance of GCNs and graph transformers.
Abstract: Graphs are an effective data structure for characterizing the ubiquitous connections and evolving behaviors that emerge from intertwined systems. Despite the variety of deep learning models for graph data, current state-of-the-art methods share a common denominator: finding a way to represent, or encode, graph entities (such as nodes and links) on top of an intricate wiring topology. Limited by the stereotype of node-to-node connections, learning global feature representations is often confined to a graph diffusion process in which local information is excessively aggregated as the random walk explores far-reaching neighborhoods on the graph. Accordingly, tremendous effort has been devoted to alleviating the feature over-smoothing issue so that current graph learning backbones can lend themselves to deep network architectures. However, little attention has been paid to improving the expressive power of the underlying graph topology, which is not only more relevant to downstream applications but also more effective at mitigating the over-smoothing risk by reducing unnecessary information exchange on the graph. Inspired by non-local means techniques from image processing, we propose a non-local information exchange mechanism that establishes express connections to distant nodes, instead of propagating information node after node along a (possibly very long) topological pathway. Since searching for express connections throughout the graph can be computationally expensive in real-world applications, we further present a hierarchical rewiring framework (coined the $express\ messenger$ wrapper) that progressively incorporates express links into graph learning in a local-to-global manner. This allows us to effectively capture multi-scale graph feature representations without resorting to a very deep model, and thus remain free of the over-smoothing challenge.
We have integrated our $express\ messenger$ wrapper (as a model-agnostic plug-in) with existing graph neural networks (using either graph convolution or transformer backbones) and achieved SOTA performance on various graph learning applications.
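The abstract only sketches the rewiring idea in prose. As a rough, hypothetical illustration (not the authors' actual algorithm — the hierarchical local-to-global wrapper and the criterion for selecting express links are simplified away here), one could rewire a graph by adding an express edge between every pair of nodes exactly $k$ hops apart, so that information reaches distant neighborhoods in a single step rather than node after node:

```python
from collections import deque

def bfs_hops(adj, src):
    """Hop distance from src to every reachable node (plain BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def add_express_edges(adj, k):
    """Return a rewired copy of adj with extra 'express' edges linking
    each node to all nodes exactly k hops away.

    NOTE: a naive single-scale illustration; the paper's hierarchical
    express-messenger wrapper is not reproduced here.
    """
    rewired = {u: set(nbrs) for u, nbrs in adj.items()}
    for u in adj:
        for v, d in bfs_hops(adj, u).items():
            if d == k:
                rewired[u].add(v)
                rewired[v].add(u)
    return rewired

# Path graph 0-1-2-3-4: express edges shortcut every 3-hop pair,
# e.g. node 0 gains a direct link to node 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rewired = add_express_edges(path, k=3)
```

A GNN layer applied on `rewired` would then aggregate messages over both the original and the express edges, which is one plausible way to realize the non-local exchange the abstract describes.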
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6116