Positive Mining in Graph Contrastive Learning

25 Sept 2024 (modified: 26 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Graph contrastive learning, unsupervised representation learning, mixture model, loss functions
Abstract: Graph Contrastive Learning (GCL), which aims to learn representations from unlabeled graphs, has made significant progress in recent years. In GCL, InfoNCE-based loss functions play a crucial role by pulling positive node pairs (those that are similar) closer together in the representation space while pushing negative (dissimilar) pairs apart. Recent research has focused primarily on refining the contrastive loss, particularly by reweighting negative node pairs or by using node similarity to select the positive node associated with each anchor. Despite the substantial success of these GCL techniques, there remains a concern that the nodes treated as positives or negatives may not reflect the true positives and negatives. To address this challenge, we introduce Positive Mining Graph Contrastive Learning (PMGCL), which uses a mixture model to estimate the probability that each node is a positive sample for a given anchor node, thereby identifying nodes that are more likely to be true positives. We conduct a comprehensive evaluation of PMGCL on a range of real-world graph datasets. The experimental results show that PMGCL significantly outperforms previous GCL methods: it achieves state-of-the-art results on unsupervised learning benchmarks and, in certain scenarios, even exceeds supervised baselines.
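To make the idea concrete, below is a minimal sketch (not the authors' code) of how mixture-model positive probabilities could weight an InfoNCE-style objective. The mixture family, the function name `positive_weighted_infonce`, and all other identifiers are illustrative assumptions: a two-component Gaussian mixture is fit over cross-view cosine similarities, and the posterior probability of the higher-mean component serves as each pair's weight in the numerator.

```python
# Illustrative sketch only: positive-probability weighting via a 2-component
# Gaussian mixture over cross-view cosine similarities (an assumption, since
# the abstract does not specify the mixture family or exact loss).
import numpy as np
from sklearn.mixture import GaussianMixture

def positive_weighted_infonce(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes under two augmented views."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T  # (N, N) cross-view cosine similarities

    # Fit a 2-component mixture on all pairwise similarities; the component
    # with the larger mean is treated as the "true positive" mode.
    gm = GaussianMixture(n_components=2, random_state=0).fit(sim.reshape(-1, 1))
    pos_comp = int(np.argmax(gm.means_.ravel()))
    p_pos = gm.predict_proba(sim.reshape(-1, 1))[:, pos_comp].reshape(sim.shape)
    np.fill_diagonal(p_pos, 1.0)  # the same node in the other view is always a positive

    exp_sim = np.exp(sim / tau)
    # Numerator: similarities weighted by their estimated probability of being
    # true positives; denominator: all pairs, as in standard InfoNCE.
    numer = (p_pos * exp_sim).sum(axis=1)
    denom = exp_sim.sum(axis=1)
    return float(-np.log(numer / denom).mean())

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
print(positive_weighted_infonce(z1, z2))
```

Under this reading, pairs the mixture assigns a high positive probability contribute to the numerator alongside the anchor's own augmented view, which is one way to draw likely true positives closer rather than treating every non-anchor node as a negative.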
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4279