E2GCL: Efficient and Expressive Contrastive Learning on Graph Neural Networks

Published: 01 Jan 2024 · Last Modified: 03 Feb 2025 · ICDE 2024 · CC BY-SA 4.0
Abstract: Graph contrastive learning has recently been proposed to learn node representations from unlabeled graphs, alleviating the heavy reliance on node labels in graph neural networks (GNNs). The core idea is to generate diverse positive and negative views from local subgraphs. GNNs then take these views as supervision signals and train the model by maximizing the similarity between the positive view pairs of each node and minimizing the similarity between positive and negative views. Despite this fruitful progress, existing graph contrastive learning approaches still suffer from low efficiency, insufficient expressivity, and unpreserved locality. First, they train GNNs on all nodes, which reduces efficiency because many nodes are similar and redundant. Second, they use only a limited set of operations (e.g., edge deletion and feature masking) to generate positive views, restricting their expressivity. Third, they delete edges and mask node features uniformly at random, which may modify important edges and features and thus damage the important locality information of nodes. In this paper, we propose an efficient and expressive contrastive learning framework for GNNs, namely E2GCL. Specifically, given a limited node budget, we select a set of representative nodes instead of all nodes to accelerate GNN training. Moreover, we use three general operations (edge deletion, edge addition, and feature perturbation), guided by edge and feature importance, to generate expressive and locality-preserving positive views. Extensive experiments on various real-world datasets demonstrate the superior effectiveness and efficiency of our proposed E2GCL.
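The abstract describes importance-guided view generation via edge deletion, edge addition, and feature perturbation, but the paper's exact procedure is not given here. The following is a minimal sketch of the general idea under stated assumptions: low-importance edges are dropped preferentially, a few edges are added, and feature noise is damped on important dimensions. All names and parameters (`generate_view`, `drop_ratio`, `add_ratio`, `noise_scale`) are hypothetical illustrations, not the authors' API.

```python
import numpy as np

def generate_view(edges, features, edge_importance, feat_importance,
                  drop_ratio=0.2, add_ratio=0.1, noise_scale=0.1, rng=None):
    """Hypothetical sketch of one locality-preserving positive view.

    edges: list of (u, v) pairs; features: (n_nodes, n_feats) array;
    edge_importance: (n_edges,) non-negative scores;
    feat_importance: (n_feats,) non-negative scores.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_nodes, n_edges = features.shape[0], len(edges)

    # Edge deletion: important edges get a lower drop probability.
    drop_prob = drop_ratio * (1.0 - edge_importance / edge_importance.max())
    keep_mask = rng.random(n_edges) >= drop_prob
    new_edges = [e for e, keep in zip(edges, keep_mask) if keep]

    # Edge addition: insert a few random edges (uniform here for brevity).
    for _ in range(int(add_ratio * n_edges)):
        u, v = rng.integers(0, n_nodes, size=2)
        if u != v:
            new_edges.append((int(u), int(v)))

    # Feature perturbation: important dimensions receive less noise.
    noise = rng.normal(0.0, noise_scale, size=features.shape)
    damping = 1.0 - feat_importance / feat_importance.max()
    return new_edges, features + noise * damping

# Toy usage on a 3-node triangle graph with 4-dimensional features.
edges = [(0, 1), (1, 2), (2, 0)]
X = np.ones((3, 4))
e_imp = np.array([0.9, 0.1, 0.5])
f_imp = np.array([1.0, 0.2, 0.2, 0.8])
view_edges, view_X = generate_view(edges, X, e_imp, f_imp)
```

Two such views per node would serve as a positive pair in the contrastive loss, with views of other nodes acting as negatives.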