CGCL: Collaborative Graph Contrastive Learning Without Handcrafted Graph Data Augmentations

Published: 01 Jan 2024 · Last Modified: 20 May 2025 · DASFAA (6) 2024 · License: CC BY-SA 4.0
Abstract: Existing graph contrastive learning (GCL) methods aim to learn invariance across multiple augmentation views, which makes them heavily reliant on handcrafted graph augmentations. However, inappropriate graph data augmentations can jeopardize this invariance. In this paper, we demonstrate the potential hazards of inappropriate augmentations and propose a novel Collaborative Graph Contrastive Learning framework (CGCL). The framework harnesses multiple graph encoders to observe the same graph; the features produced by different encoders serve as the contrastive views, which avoids introducing unstable perturbations and preserves invariance. To ensure collaboration among diverse graph encoders, we propose the concepts of asymmetric architecture and complementary encoders as design principles. To further justify this design, we use two quantitative metrics to measure the quality of the encoder assembly in CGCL. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph-level representation learning and the potential of the collaborative framework. The source code is available at https://github.com/zhangtia16/CGCL.
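The core idea (encoder outputs, rather than augmented graphs, acting as contrastive views) can be illustrated with a minimal sketch. The encoders below are toy linear-plus-pooling stand-ins for the paper's actual GNN encoders, and the NT-Xent-style pairing is an assumption about the loss; names like `encode` and `nt_xent_pair` are hypothetical, not from the CGCL codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W):
    """Toy graph encoder: transform node features, then mean-pool to a
    graph-level embedding. A stand-in for a real GNN encoder (hypothetical)."""
    return np.tanh(X @ W).mean(axis=0)

def nt_xent_pair(Za, Zb, temp=0.5):
    """NT-Xent-style contrastive loss: embeddings of the same graph from the
    two collaborating encoders form a positive pair; other graphs in the
    batch act as negatives."""
    Za = Za / np.linalg.norm(Za, axis=1, keepdims=True)
    Zb = Zb / np.linalg.norm(Zb, axis=1, keepdims=True)
    sim = Za @ Zb.T / temp  # (n, n) cross-view cosine similarities
    # row-wise log-softmax; diagonal entries are the positive pairs
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprob))

# A small batch of "graphs": node-feature matrices of varying size.
graphs = [rng.normal(size=(n, 8)) for n in (5, 7, 4)]
# Two distinct encoders observe the same graphs -- no data augmentation.
Wa, Wb = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
Za = np.stack([encode(X, Wa) for X in graphs])
Zb = np.stack([encode(X, Wb) for X in graphs])
loss = nt_xent_pair(Za, Zb)
```

Note that both views come from the unperturbed input graph, so invariance is enforced across encoders rather than across augmentations; in the paper the two encoders are further required to be architecturally asymmetric and complementary.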