Abstract: Graph Neural Networks (GNNs) have emerged as promising tools for graph semi-supervised learning. They learn low-dimensional node embeddings for downstream tasks by aggregating and updating features from neighboring nodes. However, in the semi-supervised setting, labeled node information is scarce, so the vast amount of unlabeled node information is underutilized. Recent research integrates contrastive learning with graph semi-supervised learning to better exploit this unlabeled information. Nevertheless, the differences between the cross-entropy loss and the self-supervised contrastive loss are often overlooked. To address this issue, we propose a novel relaxed graph contrastive semi-supervised learning method. We analyze the discrepancy between the sets of nodes correctly classified under the cross-entropy loss and under the self-supervised contrastive loss. Motivated by this analysis, we design a relaxed contrastive loss that accounts for node features optimized under both loss functions, thereby expanding the positive sample set. Furthermore, to ensure the quality of the expanded positive sample set, we introduce a threshold constraint based on feature similarity and prediction agreement to select more reliable positive samples. Experimental results show that our method achieves competitive performance on most datasets.
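The abstract describes the relaxed loss only at a high level. As a rough illustration, the following is a minimal PyTorch sketch of one way such a threshold-constrained, expanded-positive-set contrastive loss could be implemented; it is not the paper's actual formulation. The function name, the parameters `tau` and `sim_threshold`, and the exact positive-selection rule (embedding similarity above a threshold combined with matching hard predictions) are all assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def relaxed_contrastive_loss(z, logits, labels, labeled_mask,
                             tau=0.5, sim_threshold=0.8):
    """Hypothetical sketch of a relaxed contrastive loss.

    Positives for each anchor node include (a) labeled nodes sharing its
    ground-truth label and (b) nodes whose embedding similarity exceeds a
    threshold AND whose predicted class matches the anchor's -- the
    'relaxation' that expands the positive sample set beyond augmented
    views of the same node.
    """
    z = F.normalize(z, dim=1)            # unit-norm node embeddings
    cos = z @ z.t()                      # pairwise cosine similarity
    sim = cos / tau                      # temperature-scaled logits

    # Threshold constraint: similar enough AND same predicted class.
    preds = logits.argmax(dim=1)
    same_pred = preds.unsqueeze(0) == preds.unsqueeze(1)
    pos = same_pred & (cos >= sim_threshold)

    # For pairs of labeled nodes, same ground-truth label => positive.
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    both_labeled = labeled_mask.unsqueeze(0) & labeled_mask.unsqueeze(1)
    pos = pos | (same_label & both_labeled)
    pos.fill_diagonal_(False)            # a node is not its own positive

    # InfoNCE-style term, averaged over each anchor's positive set;
    # the diagonal is excluded from the denominator.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim_stable = sim - sim.max(dim=1, keepdim=True).values.detach()
    log_prob = sim_stable - torch.logsumexp(
        sim_stable.masked_fill(self_mask, float('-inf')),
        dim=1, keepdim=True)
    n_pos = pos.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos).sum(dim=1) / n_pos
    return per_anchor[pos.any(dim=1)].mean()  # skip anchors w/o positives
```

In this sketch, lowering `sim_threshold` admits more unlabeled nodes into each positive set (more relaxation, more noise), while raising it recovers a stricter, nearly self-supervised loss; the paper's actual selection criterion and weighting may differ.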