Abstract: State-of-the-art self-supervised representation learning methods for graphs are typically based on contrastive learning (CL) principles. These CL objectives can be posed as supervised discriminative tasks with *'hard'* labels that treat all augmented pairs of a graph as 'equally positive'. However, such a notion of 'equal' pairs is problematic for graphs, since even a small 'discrete' perturbation may lead to large semantic changes that should be carefully encapsulated within the learned representations. This paper proposes a novel CL framework for GNNs, called *Teacher-guided Graph Contrastive Learning (TGCL)*, that incorporates 'soft' pseudo-labels to facilitate a more regularized discrimination. In particular, we propose a teacher-student framework in which the student learns its representation by distilling the teacher's perception. Our TGCL framework can be combined with existing CL methods to enhance their performance. Our empirical findings validate these claims in both inductive and transductive settings across diverse downstream tasks, including molecular graphs and social networks. Experiments on benchmark datasets demonstrate that our framework consistently improves the average AUROC scores for molecular property prediction and social network link prediction. Our code is available at: https://github.com/jayjaynandy/TGCL.
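To make the contrast between 'hard' and 'soft' targets concrete, here is a minimal NumPy sketch, not the authors' implementation: `hard_infonce` uses the standard one-hot positive label, while `teacher_guided_loss` replaces it with a soft distribution derived from a hypothetical teacher's similarity scores (the function names and the temperature `tau` are illustrative assumptions, not names from the paper).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hard_infonce(student_sim, pos_idx):
    # standard contrastive loss: each anchor has exactly one
    # 'equally positive' augmented view (a one-hot target)
    logp = np.log(softmax(student_sim))
    return -logp[np.arange(len(pos_idx)), pos_idx].mean()

def teacher_guided_loss(student_sim, teacher_sim, tau=0.5):
    # soft targets: the teacher's (temperature-scaled) similarity
    # distribution replaces the one-hot label, so pairs that the
    # teacher deems semantically closer receive larger weight
    soft_targets = softmax(teacher_sim / tau)
    logp = np.log(softmax(student_sim))
    return -(soft_targets * logp).sum(axis=1).mean()
```

When the teacher's similarities are extremely peaked, the soft targets collapse to one-hot labels and the teacher-guided loss reduces to the hard InfoNCE loss; otherwise it spreads the supervisory signal across near-positive pairs, which is the regularization effect the abstract describes.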
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://youtu.be/WAG3W63m8g8
Code: https://github.com/jayjaynandy/TGCL
Assigned Action Editor: ~Ying_Wei1
Submission Number: 3032