Rethinking Temperature in Graph Contrastive Learning

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: self-supervised learning, graph contrastive learning, uniformity
Abstract: Because it does not rely on scarce human-labeled data, self-supervised learning, especially contrastive learning, has attracted much attention from researchers. It has begun to show strong advantages on both IID data (independent and identically distributed data, such as images and texts) and non-IID data (such as nodes in graphs). Recently, researchers have begun to explore the interpretability of contrastive learning and have proposed metrics for measuring the quality of learned representations of IID data, such as alignment, uniformity, and semantic closeness. Understanding the relationships among node representations is important and helps in designing algorithms with stronger interpretability. However, few studies focus on evaluating good node representations in graph contrastive learning. In this paper, we investigate and discuss what a good representation should be for a general loss (InfoNCE) in graph contrastive learning. Through theoretical analysis, we argue that global uniformity and local separation are both necessary for learning quality. We find that these two new metrics can be regulated by the temperature coefficient in the InfoNCE loss. Based on this characteristic, we develop a simple but effective algorithm, GLATE, which dynamically adjusts the temperature value during the training phase. GLATE outperforms state-of-the-art graph contrastive learning algorithms by 2.8 and 0.9 percentage points on average under transductive and inductive learning tasks, respectively. The code is available at: https://github.com/anonymousICLR22/GLATE.
One-sentence Summary: We argue that global uniformity and local separation are both necessary for the learning quality of graph contrastive learning, and develop a simple but effective algorithm, GLATE.
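To make the role of the temperature coefficient concrete, below is a minimal sketch of an InfoNCE loss over two graph views with a tunable temperature, plus a hypothetical linear annealing schedule standing in for dynamic temperature adjustment during training. The function names (info_nce, temperature_at) and the schedule are illustrative assumptions, not the authors' GLATE implementation.

# Minimal sketch of InfoNCE with a tunable temperature, assuming z1 and z2
# are node embeddings from two augmented views of the same graph.
# Illustrative only; not the authors' GLATE code.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE loss where row i of z1 and row i of z2 form the positive pair."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # cosine similarities scaled by temperature
    labels = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical dynamic schedule: linearly anneal the temperature across epochs,
# mirroring the idea of adjusting it during the training phase.
def temperature_at(epoch: int, total_epochs: int, t_start: float = 1.0, t_end: float = 0.2) -> float:
    frac = epoch / max(total_epochs - 1, 1)
    return t_start + frac * (t_end - t_start)

A smaller temperature sharpens the softmax over negatives and pushes representations toward stronger separation, while a larger one spreads gradients more evenly; scheduling it is one simple way to trade off these effects over training.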