Abstract: Recently, graph neural networks (GNNs) have played a crucial role in many recommendation scenarios. In particular, contrastive learning-based hypergraph neural networks (HGNNs) are gradually becoming a research focus for addressing data sparsity and noise. Despite many studies demonstrating their outstanding performance, some shortcomings remain: i) Most contrastive learning-based HGNNs rely primarily on cross-view contrastive learning while neglecting contrastive learning on the interaction graph. ii) Using node embeddings for hypergraph structure learning is susceptible to the influence of low-quality representations, thereby constraining the learning capability of HGNNs. To address these issues, we propose a Semantic Similarity-based Graph Contrastive Learning framework (SSGCL), which aims to jointly learn representations with rich semantic information through within-view and cross-view contrastive learning. Specifically, we first introduce consistency contrastive learning, which captures self-supervised signals through the semantic similarity between nodes and their neighborhoods; exploiting these node-neighborhood connections allows the model to better capture the unique features of each node. We then leverage the interaction graph to learn the hypergraph structure, encouraging it to extract latent node connections and thus improving its ability to describe homogeneous node relationships. Experimental evaluations on three real-world datasets show that SSGCL significantly outperforms current baseline models.
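The consistency contrastive objective described above can be illustrated with a minimal sketch: each node's embedding acts as an anchor, the mean embedding of its neighborhood serves as the positive sample, and other nodes' neighborhood embeddings serve as negatives in an InfoNCE-style loss. All function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def consistency_contrastive_loss(Z, adj, tau=0.5):
    """Hypothetical sketch of node-neighborhood consistency contrastive learning.

    Z:   (n, d) node embedding matrix.
    adj: (n, n) binary adjacency matrix of the interaction graph.
    tau: temperature for the InfoNCE softmax.
    """
    # Positive view: mean embedding of each node's neighborhood.
    deg = adj.sum(axis=1, keepdims=True)
    H = adj @ Z / np.maximum(deg, 1.0)

    # Cosine similarity between anchors (rows of Z) and neighborhood views.
    Zn = Z / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    Hn = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    sim = (Zn @ Hn.T) / tau

    # InfoNCE: the matched node-neighborhood pair sits on the diagonal;
    # all other neighborhood views act as negatives.
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In a full model this loss would be combined with the cross-view contrastive terms and the recommendation objective; the sketch only shows the within-view (interaction-graph) signal.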