GCVR: Reconstruction from Cross-View Enable Sufficient and Robust Graph Contrastive Learning

Published: 26 Apr 2024, Last Modified: 15 Jul 2024, UAI 2024 poster, CC BY 4.0
Keywords: Graph Neural Network, Graph Self-supervised Learning
TL;DR: The paper proposes GCVR, a novel graph self-supervised learning framework with a reconstruction task that improves representation robustness without sacrificing sufficiency.
Abstract: Among existing self-supervised learning (SSL) methods for graphs, graph contrastive learning (GCL) frameworks typically generate supervision automatically by transforming the same graph into different views through graph augmentation operations. These computation-efficient augmentation techniques have enabled the widespread use of GCL to alleviate the supervision shortage. Despite the remarkable performance of these GCL methods, the InfoMax principle used to guide their optimization has been proven insufficient to discard redundant information without losing important features. In light of this, we introduce Graph Contrastive Learning with Cross-View Reconstruction (GCVR), which aims to learn robust and sufficient representations from graph data. Specifically, GCVR augments conventional graph contrastive learning with a cross-view reconstruction mechanism that elicits the essential features of raw graphs. In addition, we introduce an extra adversarial view, perturbed from the original view, into the contrastive loss to preserve the intactness of the graph semantics and strengthen representation robustness. We empirically demonstrate that our proposed model outperforms state-of-the-art baselines on graph classification tasks over multiple benchmark datasets.
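The abstract's two additions to standard GCL can be sketched as a combined objective: an InfoNCE-style contrastive term over the two augmented views plus an adversarial view, and a cross-view reconstruction term. The sketch below is a minimal NumPy illustration under assumed details (additive loss form, MSE reconstruction, a generic `decode` function, and weights `alpha`/`beta`); it is not the paper's actual implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # InfoNCE contrastive loss: matched rows of z1 and z2 are positives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                              # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # positives on the diagonal

def gcvr_style_loss(z_a, z_b, z_adv, x_a, x_b, decode, alpha=1.0, beta=1.0):
    # Contrastive part: two augmented views, plus an adversarial view
    # contrasted against view A (assumption: simple additive combination).
    l_con = info_nce(z_a, z_b) + beta * info_nce(z_a, z_adv)
    # Cross-view reconstruction: decode view B's features from view A's
    # embedding and vice versa (MSE is a stand-in for the actual objective).
    l_rec = np.mean((decode(z_a) - x_b) ** 2) + np.mean((decode(z_b) - x_a) ** 2)
    return l_con + alpha * l_rec
```

A quick usage example with random embeddings and a linear decoder: `gcvr_style_loss(z_a, z_b, z_adv, x_a, x_b, lambda z: z @ W)` returns a non-negative scalar that a training loop would minimize.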
Supplementary Material: zip
List Of Authors: Wen, Qianlong and Ouyang, Zhongyu and Zhang, Chunhui and Qian, Yiyue and Zhang, Chuxu and Ye, Yanfang
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 315