Abstract: Graph Contrastive Learning (GCL), which trains a graph neural network encoder by contrasting different views in a self-supervised manner, has demonstrated remarkable efficacy in graph representation learning. However, most existing GCL approaches are time- and memory-consuming because they require extensive node contrasts across the entire original graph. To address this, we present CGCL, a fast graph contrastive learning approach based on graph coarsening, which aims to train an encoder on coarse graphs at lower time and memory cost while performing comparably to one trained on the original graph. Specifically, we coarsen the original graph into a series of highly informative, smaller and more tractable coarse graphs to reduce the problem scale. We then design a multi-scale contrastive learning paradigm over this multi-granularity space, incorporating coarse-coarse and coarse-fine contrasts to efficiently capture global and hierarchical information. CGCL accelerates model training while ensuring that the learned node representations remain comprehensive. Extensive node classification experiments on seven real-world datasets demonstrate that CGCL achieves competitive performance with lower time and memory costs. In particular, on the ogbn-mag dataset, CGCL reduces time consumption by up to 89.06% and memory usage by up to 50.56% compared to state-of-the-art methods while maintaining comparable performance.
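To make the coarse-fine contrast concrete, the sketch below shows one plausible InfoNCE-style formulation: each node in the original (fine) graph treats the embedding of its own super-node in a coarse graph as the positive, and all other super-nodes as negatives. The function name, the `cluster` assignment vector, and the temperature default are illustrative assumptions for this sketch, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def coarse_fine_infonce(z_fine, z_coarse, cluster, temperature=0.5):
    """Illustrative coarse-fine contrastive loss (InfoNCE-style sketch).

    z_fine:   (n, d) embeddings of nodes in the original (fine) graph
    z_coarse: (m, d) embeddings of super-nodes in a coarse graph
    cluster:  (n,)   index of the coarse super-node each fine node belongs to
    """
    z_fine = F.normalize(z_fine, dim=1)
    z_coarse = F.normalize(z_coarse, dim=1)
    # Cosine similarity of every fine node to every coarse super-node: (n, m)
    logits = z_fine @ z_coarse.t() / temperature
    # Positive pair = (fine node, its own super-node); other super-nodes act as negatives
    return F.cross_entropy(logits, cluster)

# Example usage with random tensors standing in for encoder outputs
n, m, d = 1000, 100, 64
loss = coarse_fine_infonce(torch.randn(n, d), torch.randn(m, d),
                           torch.randint(0, m, (n,)))
```

A coarse-coarse term could be formed analogously by contrasting two augmented views of the same coarse graph; the key efficiency gain in either case is that the similarity matrix scales with the number of super-nodes rather than the full node set.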