Abstract: Knowledge Graph Recommendation (KGR), which incorporates Knowledge Graphs (KGs) as auxiliary information into recommender systems to improve model performance, has attracted considerable interest. Recently, the KGR community has focused on designing end-to-end KGR models based on Graph Neural Networks (GNNs). Unfortunately, while existing GNN-based KGR models extract high-order attribute knowledge, they suffer from restrictions in several vital aspects: 1) they neglect finer-grained feature-interaction information during GNN aggregation, and 2) they lack adequate supervised signals, leading to suboptimal performance. To address these gaps, we propose Dual-view Self-supervised Co-training for Knowledge Graph Recommendation (DSCKG). We consider two complementary views: a user-item collaborative view and a KG structural view. Specifically, for the collaborative view, we first extract high-order collaborative user/item representations with GNNs and then impose a discrepancy regularization term to strengthen the self-discrimination of these representations. For the structural view, we likewise employ GNNs to extract high-order features and then apply novel Dual-core Convolutional Neural Networks to capture finer-grained feature-interaction signals at both the bit level and the vector level. DSCKG thereby performs high-quality self-supervised co-training across the two views, improving node representation learning. Experimental results demonstrate that DSCKG achieves remarkable improvements over state-of-the-art methods.
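The abstract describes a two-view self-supervised objective: embeddings of the same node from the collaborative and structural views are aligned, while a discrepancy term encourages self-discrimination within a view. As a minimal toy sketch of that idea in NumPy (not the paper's actual implementation — the function names, the InfoNCE-style contrastive form, and the 0.1 regularization weight are our assumptions), it could look like:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_view_contrastive_loss(z_collab, z_struct, tau=0.2):
    """InfoNCE-style loss: a node's collaborative-view and structural-view
    embeddings are positives; other nodes in the batch act as negatives."""
    a = l2_normalize(z_collab)
    b = l2_normalize(z_struct)
    sim = (a @ b.T) / tau                     # pairwise cosine similarities / temperature
    pos = np.diag(sim)                        # matched (same-node) pairs
    return -np.mean(pos - np.log(np.sum(np.exp(sim), axis=1)))

def discrepancy_regularization(z):
    """Mean off-diagonal cosine similarity; penalizing it pushes distinct
    node embeddings apart (self-discrimination within one view)."""
    a = l2_normalize(z)
    sim = a @ a.T
    n = z.shape[0]
    return np.mean(sim[~np.eye(n, dtype=bool)])

# Toy embeddings: the structural view is a slightly perturbed copy of the
# collaborative view, standing in for two GNN encoders over the same nodes.
rng = np.random.default_rng(0)
z_c = rng.normal(size=(8, 16))
z_s = z_c + 0.1 * rng.normal(size=(8, 16))

total = cross_view_contrastive_loss(z_c, z_s) + 0.1 * discrepancy_regularization(z_c)
```

In this sketch, aligned views yield a lower contrastive loss than unrelated embeddings, which is the signal the co-training paradigm exploits; the actual DSCKG objective and its weighting are defined in the paper.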