Abstract: Graph Contrastive Learning (GCL) addresses label scarcity by learning representations from unlabeled graph data, contrasting views of the graph so that limited supervision can generalize to a much broader set of nodes. However, recent GCL methods often rely on uniform negative-sampling schemes, such as random sampling, which yields suboptimal performance. To tackle this challenge, we present GraphSaSe, an approach tailored to graph contrastive learning. Our method introduces a reinforcement learning strategy that translates the divergence between positive pairs into a reward signal; this signal generates selection probabilities that dynamically guide the choice of negative samples during training. We further examine the impact of negative-sample selection at different stages of graph contrastive learning and analyze how the discount factor shapes the reward mechanism, and both analyses improve the overall performance of the model. Comprehensive experiments on diverse real-world datasets validate the effectiveness of our algorithm against contemporary state-of-the-art methods.
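The reward-driven negative selection described above can be sketched as a bandit-style REINFORCE loop. This is a minimal illustration, not the paper's algorithm: all names are hypothetical, and the paper's positive-pair divergence reward is replaced by a stand-in per-candidate reward (similarity to the anchor, so "harder" negatives score higher) purely so the probability update is visible in isolation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy setup (hypothetical): an anchor embedding and a pool of
# candidate negative samples.
dim, n_candidates = 8, 5
anchor = rng.normal(size=dim)
candidates = rng.normal(size=(n_candidates, dim))

def reward_of(idx):
    # Stand-in reward: cosine similarity of the chosen negative to the
    # anchor. In the paper the reward is instead derived from the
    # divergence between positive pairs during training.
    c = candidates[idx]
    return float(anchor @ c / (np.linalg.norm(anchor) * np.linalg.norm(c)))

logits = np.zeros(n_candidates)  # selection logits over negatives
baseline, gamma, lr = 0.0, 0.9, 0.5  # gamma: discount for the running baseline

for step in range(500):
    probs = softmax(logits)
    idx = rng.choice(n_candidates, p=probs)  # sample one negative
    r = reward_of(idx)
    baseline = gamma * baseline + (1 - gamma) * r  # discounted running average
    grad = -probs
    grad[idx] += 1.0                     # d log pi(idx) / d logits
    logits += lr * (r - baseline) * grad  # REINFORCE update

probs = softmax(logits)
best = int(np.argmax([reward_of(i) for i in range(n_candidates)]))
```

After training, the selection distribution `probs` concentrates on the high-reward candidate, illustrating how a scalar reward can steer which negatives are drawn as training proceeds.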