Abstract: Graph representation learning aims to preserve graph topology when mapping nodes to vector representations, enabling downstream tasks such as node classification and community detection. However, most graph neural network models require extensive labelled data, which limits their practical applicability. To address this, researchers have explored Graph Contrastive Learning (GCL), which combines graph augmentation with contrastive learning to better capture graph structure and features, opening new avenues for real-world problems with limited labelled data. Building on this line of work, this paper proposes Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning (GRE2-MDCL). GRE2-MDCL first augments the input graph globally and locally, using SVD and LAGNN respectively. The augmented data is then fed into a triple network whose core model is a multi-head attention GNN. Finally, GRE2-MDCL constructs a multidimensional contrastive loss, incorporating cross-network, cross-view, and neighbor contrast, to optimize the model. Evaluated on Cora, Citeseer, and PubMed, GRE2-MDCL achieves average accuracies of 83.1%, 72.6%, and 82.7% respectively, outperforming baseline GCL models. Visualizations also show tighter intra-cluster aggregation and clearer inter-cluster boundaries, demonstrating the framework's effectiveness in improving upon the baselines.
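The SVD-based global augmentation mentioned above can be illustrated with a minimal sketch: a truncated SVD of the adjacency matrix keeps only the dominant singular components, yielding a denoised, globally structured view of the graph. The function name, the toy graph, and the choice of rank below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svd_global_augment(adj: np.ndarray, rank: int = 2) -> np.ndarray:
    """Low-rank reconstruction of an adjacency matrix via truncated SVD.

    Hypothetical sketch of a 'global augmentation' step: keeping the
    top-`rank` singular components preserves dominant global structure
    while smoothing out local noise.
    """
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    # Reconstruct using only the top-`rank` singular triplets.
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Toy 4-node path graph as a symmetric adjacency matrix (illustrative).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_aug = svd_global_augment(A, rank=2)
```

With `rank` equal to the full matrix rank the reconstruction recovers the original adjacency exactly; smaller ranks trade fidelity for a smoother, globally consistent view that can serve as one contrastive branch.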