AGCL: Adaptive Graph Contrastive Learning for graph representation learning

Published: 01 Jan 2024 · Last Modified: 30 Sep 2024 · Neurocomputing 2024 · CC BY-SA 4.0
Abstract: Unsupervised graph representation learning has attracted great attention because it can learn low-dimensional, compact node embeddings from high-dimensional, sparse graph data without labels. However, most previous methods suffer from two major drawbacks: limited use of the global graph structure and the problem of false-negative samples. To address these problems, we propose a novel Adaptive Graph Contrastive Learning (AGCL) method that uses multiple graph filters to capture both local and global view information, together with an adaptive graph contrastive learning framework that alleviates the false-negative sample problem. AGCL first applies a Graph Convolutional Network (GCN) filter and our designed diffusion-based filters to smooth the initial node features. It then measures node-pair similarity and iteratively selects similar and dissimilar node pairs as positive and negative samples for graph contrastive learning. Finally, AGCL leverages various aggregators to obtain node embeddings from the multiple views. In addition, we propose AGCL-Light, which reduces the complexity of updating the training samples by pre-selecting a subset of similar nodes for each node for the subsequent updates. Extensive experiments on nine benchmark datasets demonstrate that our models outperform previous state-of-the-art methods.
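As a rough illustration of the pipeline described in the abstract, the sketch below is a minimal, self-contained PyTorch example, not the authors' implementation: it smooths node features with a GCN-style filter (local view) and a truncated personalized-PageRank-style diffusion filter (global view), adaptively selects each node's most similar nodes as positives and least similar nodes as negatives, and computes an InfoNCE-style contrastive loss. The exact filter forms, the selection rule, and all hyper-parameters (`hops`, `alpha`, `k`, `num_pos`, `num_neg`, `tau`) are illustrative assumptions rather than the paper's specification.

```python
import torch
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def gcn_filter(adj: torch.Tensor, x: torch.Tensor, hops: int = 2) -> torch.Tensor:
    """Local view: smooth features by repeatedly applying the normalized adjacency."""
    a_hat = normalized_adjacency(adj)
    for _ in range(hops):
        x = a_hat @ x
    return x


def diffusion_filter(adj: torch.Tensor, x: torch.Tensor,
                     alpha: float = 0.15, k: int = 10) -> torch.Tensor:
    """Global view: truncated personalized-PageRank-style diffusion of the features."""
    a_hat = normalized_adjacency(adj)
    out, prop = alpha * x, x
    for _ in range(k):
        prop = (1 - alpha) * (a_hat @ prop)
        out = out + alpha * prop
    return out


def select_pairs(z: torch.Tensor, num_pos: int = 1, num_neg: int = 2):
    """Adaptive selection: most similar nodes become positives, least similar negatives."""
    n = z.size(0)
    zn = F.normalize(z, dim=1)
    sim = zn @ zn.t()                       # cosine similarity matrix
    eye = torch.eye(n, dtype=torch.bool)
    pos_idx = sim.masked_fill(eye, float("-inf")).topk(num_pos, dim=1).indices
    neg_idx = sim.masked_fill(eye, float("inf")).topk(num_neg, dim=1, largest=False).indices
    positives = torch.zeros(n, n).scatter_(1, pos_idx, 1.0).bool()
    negatives = torch.zeros(n, n).scatter_(1, neg_idx, 1.0).bool()
    return positives, negatives, sim


def contrastive_loss(sim: torch.Tensor, positives: torch.Tensor,
                     negatives: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss over the selected positive and negative pairs."""
    exp_sim = torch.exp(sim / tau)
    pos = (exp_sim * positives).sum(dim=1)
    neg = (exp_sim * negatives).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()


# Toy usage: a 5-node ring graph with random 8-dimensional features.
n = 5
adj = torch.zeros(n, n)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
x = torch.randn(n, 8)

z_local = gcn_filter(adj, x)                # local (GCN) view
z_global = diffusion_filter(adj, x)         # global (diffusion) view
z = 0.5 * (z_local + z_global)              # simple mean aggregation of the two views
positives, negatives, sim = select_pairs(z)
print(contrastive_loss(sim, positives, negatives).item())
```

Here the mean of the two filtered views stands in for the paper's "various aggregators", and the top-k/bottom-k selection stands in for its iterative sample update; both are simplifications chosen to keep the sketch short.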