Low-Rank Robust Graph Contrastive Learning

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Low-Rank Robust Graph Contrastive Learning, Bayesian Nonparametric Method, Generalization Bound, Transductive Learning
TL;DR: We propose Low-Rank Robust Graph Contrastive Learning, which performs transductive node classification with a robust GCL encoder and a novel low-rank transductive algorithm inspired by a sharp generalization bound for transductive classification.
Abstract: Graph Neural Networks (GNNs) have been widely used to learn node representations, with outstanding performance on various tasks such as node classification. However, recent studies have revealed that noise, which inevitably exists in real-world graph data, can considerably degrade the performance of GNNs. In this work, we propose a novel and robust method, Low-Rank Robust Graph Contrastive Learning (LR-RGCL). LR-RGCL performs transductive node classification in two steps. First, a robust GCL encoder named RGCL is trained by prototypical contrastive learning with Bayesian nonparametric Prototype Learning (BPL). Next, using the robust features produced by RGCL, a novel and provable low-rank transductive classification algorithm is used to classify the unlabeled nodes in the graph. Our low-rank transductive classification algorithm is inspired by the low-frequency property of the graph data and its labels, and a theoretical result on the generalization of our algorithm is provided. To the best of our knowledge, our theoretical result is among the first to demonstrate the advantage of low-rank learning in transductive classification. Extensive experiments on public benchmarks demonstrate the superior performance of LR-RGCL and the robustness of the learned node representations. The code of LR-RGCL is available at \url{https://anonymous.4open.science/r/LRR-GCL-3B3C/}.
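To make the second step concrete, below is a minimal sketch of what a low-rank transductive classification step could look like. This is not the authors' LR-RGCL implementation (see the linked repository for that); it only illustrates the core idea of restricting the learned node features to a dominant low-rank subspace before fitting a classifier on the labeled nodes. The names `features`, `labels`, `train_idx`, `test_idx`, and the choice of `rank` are assumptions for illustration, and truncated SVD plus logistic regression stand in for the paper's provable algorithm.

```python
# Illustrative sketch only: low-rank projection of encoder features
# followed by transductive classification of the unlabeled nodes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def low_rank_transductive_classify(features, labels, train_idx, test_idx, rank=32):
    """Project node features onto their top-`rank` singular directions
    (a low-rank approximation, in the spirit of the low-frequency property),
    then fit a linear classifier on the labeled nodes and predict the rest."""
    # Truncated SVD keeps only the dominant low-rank structure of the features.
    U, S, _ = np.linalg.svd(features, full_matrices=False)
    low_rank_feats = U[:, :rank] * S[:rank]  # rank-k representation of every node

    clf = LogisticRegression(max_iter=1000)
    clf.fit(low_rank_feats[train_idx], labels[train_idx])  # labeled nodes only
    return clf.predict(low_rank_feats[test_idx])           # unlabeled nodes
```

In the transductive setting, the projection is computed over all nodes (labeled and unlabeled) at once, which is what lets the unlabeled nodes share the low-rank structure exploited at prediction time.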
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6871