GraphECL: Towards Efficient Contrastive Learning for Graphs

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Because labels are inherently scarce on graphs, learning useful representations without supervision is of great benefit. Yet, existing graph self-supervised learning methods overlook the scalability challenge and, due to the intensive message passing of graph neural networks, cannot infer representations quickly in latency-constrained applications. In this paper, we present GraphECL, a simple and efficient contrastive learning paradigm for graphs. To accelerate inference, GraphECL does not rely on graph augmentations but instead introduces cross-model contrastive learning, where positive pairs are formed from the MLP and GNN representations of a central node and its neighbors. We provide a theoretical analysis of this cross-model framework and discuss why the MLP can still capture structural information and achieve downstream performance comparable to the GNN. Extensive experiments on common real-world tasks verify the superior performance of GraphECL compared to state-of-the-art methods, highlighting its intriguing properties, including better inference efficiency and generalization to both homophilous and heterophilous graphs. On large-scale datasets such as Snap-patents, the MLP learned by GraphECL is 286.82x faster at inference than GCL methods with the same number of GNN layers.
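The abstract describes cross-model contrastive learning between an MLP encoder and a GNN encoder, where a node's MLP representation is treated as positive with the GNN representations of the node and its neighbors. Below is a minimal, hypothetical sketch of that idea; the encoder architectures, the InfoNCE-style loss form, and all hyperparameters are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of cross-model graph contrastive learning as described in the abstract:
# an MLP and a GNN encode the same nodes, and each node's MLP embedding is pulled toward the
# GNN embeddings of itself and its neighbors while other nodes serve as negatives.
# At inference time only the MLP is kept, so no message passing is required.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim)
        )

    def forward(self, x):
        return self.net(x)


class MeanGNNEncoder(nn.Module):
    """One round of mean-neighbor aggregation followed by a linear map (a stand-in GNN)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: dense [N, N] 0/1 adjacency with self-loops; row-normalize for mean aggregation.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin((adj @ x) / deg)


def cross_model_contrastive_loss(z_mlp, z_gnn, adj, tau=0.5):
    """InfoNCE-style loss (assumed form): the MLP embedding of node i is positive with the
    GNN embeddings of i and its neighbors; all other nodes act as negatives."""
    z_mlp = F.normalize(z_mlp, dim=1)
    z_gnn = F.normalize(z_gnn, dim=1)
    sim = torch.exp(z_mlp @ z_gnn.t() / tau)   # [N, N] pairwise similarity scores
    pos = (sim * adj).sum(dim=1)               # similarity mass on neighbors (positives)
    denom = sim.sum(dim=1)                     # similarity mass on all nodes
    return -torch.log(pos / denom).mean()


# Toy usage on a random graph (assumed setup).
N, d = 64, 16
x = torch.randn(N, d)
adj = (torch.rand(N, N) < 0.05).float()
adj = ((adj + adj.t() + torch.eye(N)) > 0).float()  # symmetrize and add self-loops

mlp, gnn = MLPEncoder(d, 32, 32), MeanGNNEncoder(d, 32)
opt = torch.optim.Adam(list(mlp.parameters()) + list(gnn.parameters()), lr=1e-3)

loss = cross_model_contrastive_loss(mlp(x), gnn(x, adj), adj)
loss.backward()
opt.step()
```

One plausible reading of the claimed speedup is visible here: after training, representations come from `mlp(x)` alone, so inference cost is independent of graph size and neighborhood fan-out.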
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8753