Keywords: Graph Representation, Contrastive Learning, Predictive Learning, Data Augmentation
Abstract: Self-supervised learning on graph-structured data aims to produce robust representations that transfer to downstream tasks. Among existing approaches, graph contrastive learning (GCL) based on data augmentation has emerged with promising performance in learning graph representations. However, some augmentations may change the graph semantics through perturbations of the graph structure, such as perturbing nodes or edges. In such cases, existing GCL methods can suffer from degraded performance because noisy augmentations are introduced into training. To address this issue, we propose to train a discriminative model that enhances GCL for graph-structured data, called Perturbation Discrimination-Enhanced GCL (PerEG). Specifically, for each augmented graph, the discriminative model is trained to predict whether each node was perturbed relative to the original graph. The discrimination results are then exploited to refine the contrastive objective, enabling controllable use of augmentations: informative augmentations are exploited while noisy ones are effectively suppressed. Extensive experiments in unsupervised, semi-supervised, and transfer learning scenarios show that PerEG outperforms state-of-the-art methods on eight datasets.
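To make the abstract's mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the two ingredients it describes: a per-node discriminator predicting which nodes were perturbed, and a contrastive loss refined by those predictions. This is not the authors' implementation; the module names, the down-weighting scheme, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the PerEG idea described in the abstract (not the
# authors' released code). All names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationDiscriminator(nn.Module):
    """Per-node binary head: was this node perturbed by the augmentation?"""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, hidden_dim] -> per-node perturbation logits
        return self.head(node_emb).squeeze(-1)

def discrimination_loss(logits: torch.Tensor, perturbed_mask: torch.Tensor):
    # perturbed_mask: [num_nodes], 1.0 where the node was perturbed
    return F.binary_cross_entropy_with_logits(logits, perturbed_mask)

def refined_contrastive_loss(z1, z2, perturb_prob, temperature=0.5):
    """InfoNCE between two views, down-weighting nodes the discriminator
    flags as likely perturbed -- one plausible way to 'refine' GCL with
    the discrimination results, assumed here for illustration."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature                  # [N, N] similarity
    labels = torch.arange(z1.size(0), device=z1.device)
    per_node = F.cross_entropy(sim, labels, reduction="none")
    weights = 1.0 - perturb_prob.detach()            # trust clean nodes more
    return (weights * per_node).sum() / weights.sum().clamp(min=1e-8)

# Usage with random stand-ins for GNN encoder outputs:
if __name__ == "__main__":
    N, D = 32, 64
    z_orig, z_aug = torch.randn(N, D), torch.randn(N, D)
    mask = (torch.rand(N) < 0.2).float()             # nodes hit by augmentation
    disc = PerturbationDiscriminator(D)
    logits = disc(z_aug)
    loss = discrimination_loss(logits, mask) \
         + refined_contrastive_loss(z_orig, z_aug, torch.sigmoid(logits))
    loss.backward()
```

The weighting scheme shown (scaling each node's InfoNCE term by the predicted probability that it was left unperturbed) is only one way the discrimination results could steer the contrastive objective; the paper itself would specify the exact refinement.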
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6654