MDGCL: Debiased Graph Contrastive Learning with Knowledge of Model Discrepancy

TMLR Paper 2355 Authors

08 Mar 2024 (modified: 14 Mar 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Graph contrastive learning (GCL) has shown promising results for self-supervised representation learning on graph-structured data, benefiting downstream tasks such as node classification and graph classification. Despite this strong performance, most existing GCL methods share a prevalent issue: they arbitrarily select other data points as negative samples, even when those points share the anchor's ground-truth label. Including such false negatives can degrade the performance of GCL. In this study, we present a dual-branch ensemble learning framework that uses model discrepancy as a crucial indicator for distinguishing false negatives from true negatives. Building on this, we develop a debiased contrastive learning objective that pulls false negatives closer to the anchor in the embedding space while retaining the capacity to repel true negatives. Extensive experiments on real-world datasets demonstrate the effectiveness of our framework.
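
The abstract describes the objective only at a high level. As a rough illustration of how a discrepancy-aware debiased contrastive loss might be structured, the PyTorch sketch below uses two encoder branches whose disagreement flags candidate false negatives. The function name `discrepancy_debiased_loss`, the threshold `delta`, and the agreement heuristic are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def discrepancy_debiased_loss(h1, h2, g1, g2, tau=0.5, delta=0.1):
    """Sketch of a discrepancy-aware debiased contrastive loss (assumed form).

    h1, h2: (N, d) embeddings of two augmented views from branch A.
    g1, g2: (N, d) embeddings of the same views from branch B.
    Off-diagonal pairs that both branches score similarly and highly are
    flagged as candidate false negatives and pulled toward the anchor.
    """
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    g1, g2 = F.normalize(g1, dim=1), F.normalize(g2, dim=1)

    sim_a = h1 @ h2.t() / tau      # pairwise similarities under branch A
    sim_b = g1 @ g2.t() / tau      # pairwise similarities under branch B
    disc = (sim_a - sim_b).abs()   # per-pair model discrepancy

    n = h1.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=h1.device)
    # Heuristic (assumption): low inter-branch discrepancy plus
    # above-average similarity suggests the pair shares the anchor's
    # latent class, i.e. it is likely a false negative.
    fn_mask = (disc < delta) & (sim_a > sim_a.mean()) & ~eye

    exp_sim = torch.exp(sim_a)
    # Numerator: the aligned view plus detected false negatives (pulled in);
    # denominator: all pairs, so remaining true negatives are repelled.
    pos = exp_sim.diagonal() + (exp_sim * fn_mask).sum(dim=1)
    return -torch.log(pos / exp_sim.sum(dim=1)).mean()
```

In this sketch, detected false negatives move from their denominator-only role in ordinary InfoNCE into the numerator, so the gradient pulls them toward the anchor while all other off-diagonal pairs are still pushed away, matching the behavior the abstract describes.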
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Hanwang_Zhang3
Submission Number: 2355