Abstract: Graph self-supervised learning aims to mine useful information from unlabeled graph data and has been successfully applied to pre-train graph representations. Many existing approaches learn powerful embeddings by contrasting two augmented views of a graph. However, none of these graph contrastive methods fully exploits the diversity of the available augmentations, so they are prone to overfitting and yield representations with limited generalization ability. In this paper, we propose a novel Graph Self-supervised Learning method with Augmentation-aware Contrastive Learning. Our method is based on the finding that increasing augmentation diversity during pre-training yields models with better generalization ability. To make full use of the information provided by diverse augmentations, we construct a new augmentation-aware prediction task that is complementary to the contrastive learning task. Just as pre-training must adapt quickly to different downstream tasks, we simulate train-test adaptation on the constructed tasks to further enhance the model's learning ability; this strategy can be viewed as a form of meta-learning. Experimental results show that our method outperforms previous methods and learns better representations for a variety of downstream tasks.
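Since the abstract describes pairing a two-view contrastive objective with an augmentation-aware prediction task, a minimal sketch of such a combined loss may help make the idea concrete. This is not the paper's implementation: the encoder, the NT-Xent contrastive loss, the augmentation-type classification head, and all names (`AugmentationAwareModel`, `aug_head`, `alpha`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentationAwareModel(nn.Module):
    """Hypothetical model: a shared encoder feeding a contrastive projection
    head and an augmentation-type classifier (the 'augmentation-aware' task)."""
    def __init__(self, encoder, emb_dim, num_aug_types):
        super().__init__()
        self.encoder = encoder                             # any graph encoder producing embeddings
        self.proj = nn.Linear(emb_dim, emb_dim)            # projection head for contrastive learning
        self.aug_head = nn.Linear(emb_dim, num_aug_types)  # predicts which augmentation was applied

    def forward(self, x):
        h = self.encoder(x)
        return self.proj(h), self.aug_head(h)

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss between two batches of views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # pairwise cosine similarities
    labels = torch.arange(z1.size(0))       # matching views sit on the diagonal
    return F.cross_entropy(logits, labels)

def combined_loss(model, view1, view2, aug_ids1, aug_ids2, alpha=0.5):
    """Contrastive loss plus the complementary augmentation-prediction loss.
    aug_ids*: integer labels naming the augmentation applied to each view."""
    z1, p1 = model(view1)
    z2, p2 = model(view2)
    contrastive = nt_xent(z1, z2)
    aug_pred = F.cross_entropy(p1, aug_ids1) + F.cross_entropy(p2, aug_ids2)
    return contrastive + alpha * aug_pred   # alpha trades off the two tasks

# Toy usage, with dense features standing in for pooled graph readouts:
enc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
model = AugmentationAwareModel(enc, emb_dim=64, num_aug_types=4)
v1, v2 = torch.randn(8, 32), torch.randn(8, 32)
ids1, ids2 = torch.randint(0, 4, (8,)), torch.randint(0, 4, (8,))
loss = combined_loss(model, v1, v2, ids1, ids2)
```

The abstract's meta-learning component (simulated train-test adaptation over the constructed tasks) would wrap an inner/outer optimization loop around this loss; it is omitted here because the abstract does not specify its form.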