Published: 20 Jul 2024 · CC BY 4.0
Graph Contrastive Learning (GCL) in real-world settings aims to alleviate label scarcity by exploiting graph structure to propagate labels from a small set of labeled data to a much larger pool of unlabeled data. Recent advances in combining neural networks with graph structure have shown promising progress. However, prevalent GCL methods often overlook a fundamental issue of semi-supervised learning (SSL): they rely on uniform negative-sample selection schemes such as random sampling, which yields suboptimal performance in these settings. To address this challenge, we present GraphSaSe, an approach tailored to graph representation tasks. Our model consists of two pivotal components: a Graph Contrastive Learning Framework (GCLF) and a Selection Distribution Generator (SDG) driven by reinforcement learning to derive selection probabilities. We introduce a strategy that translates the divergence between positive graph representations into a reward, dynamically guiding the selection of negative samples during training. This adaptive scheme minimizes the divergence between augmented positive pairs, thereby enriching the graph representations learned for downstream applications. Comprehensive experiments on diverse real-world datasets validate the effectiveness of our algorithm against contemporary state-of-the-art methods.
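The core idea, reward-driven negative sampling, can be illustrated with a minimal sketch. This is purely illustrative and not the paper's actual SDG: the function names, the cosine-based divergence, and the multiplicative policy update are all assumptions chosen for brevity.

```python
import numpy as np

def cosine_divergence(z1, z2):
    """Divergence between two augmented 'positive' views: 1 - cosine similarity."""
    num = float(np.dot(z1, z2))
    den = float(np.linalg.norm(z1) * np.linalg.norm(z2)) + 1e-12
    return 1.0 - num / den

def update_selection_probs(probs, chosen_idx, reward, lr=0.5):
    """REINFORCE-style multiplicative update of the negative-sample selection
    distribution (a hypothetical stand-in for the paper's SDG): a higher
    reward makes the sampled negative more likely to be chosen again."""
    logits = np.log(probs + 1e-12)
    logits[chosen_idx] += lr * reward
    exp = np.exp(logits - logits.max())  # softmax renormalization
    return exp / exp.sum()

# Toy training step: one anchor, its augmented positive, 8 candidate negatives.
rng = np.random.default_rng(0)
z_anchor = rng.normal(size=16)
z_pos = rng.normal(size=16)
probs = np.full(8, 1.0 / 8)              # start from uniform selection
chosen = int(rng.choice(8, p=probs))     # sample one negative to contrast against
# Low positive-pair divergence -> high reward for the current selection policy.
reward = 1.0 - cosine_divergence(z_anchor, z_pos)
probs = update_selection_probs(probs, chosen, reward)
```

In the full method this update would be interleaved with the contrastive loss on the GCLF side, so the selection distribution adapts as the representations improve.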