Motif-Guided Multiview Contrastive Learning for Knowledge Graph-Enhanced Recommendation

Cheng Li, Yong Xu, Xin He, Yujun Zhu, Jinde Cao, Qun Fang

Published: 01 Jan 2025 · Last Modified: 28 Nov 2025 · IEEE Transactions on Computational Social Systems · License: CC BY-SA 4.0
Abstract: Contrastive learning (CL) has demonstrated exceptional capability in extracting supervision signals and mitigating noise, increasingly attracting interest for its application in knowledge graph (KG)-enhanced recommendation. Nevertheless, existing approaches predominantly rely on a single augmentation strategy to construct contrastive views, often failing to balance feature perturbation against semantic preservation. To overcome this limitation, we propose motif-guided multiview contrastive learning (MMCL), a unified framework for contrastive augmentation. MMCL leverages diverse view-generation strategies across the user–item bipartite graph and the KG, perturbing graph structures while preserving essential semantics to the greatest extent possible. Specifically, for collaborative signals, we devise a data augmentation mechanism to model the interaction dynamics between users and items. For semantic information, we introduce a novel graph augmentation technique that constructs an interactive motif KG, enabling semantic contrastive learning within localized views. Furthermore, joint enhancement of graph and interaction data allows the model to capture the global topological features of nodes. MMCL integrates contrastive learning at both local and global levels, effectively embedding local semantics and global topology into the learned representations. Extensive experiments on three publicly available datasets show that MMCL outperforms advanced methods, particularly in scenarios with sparse interactions and noisy KGs, while significantly alleviating popularity bias in recommendation.
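The abstract does not spell out the contrastive objective, but multiview CL frameworks of this kind typically score agreement between two augmented views of the same node with an InfoNCE-style loss. The sketch below is an illustrative, generic formulation — not MMCL's actual loss — assuming each row of `z1` and `z2` is the embedding of the same node under two different view-generation strategies (positive pair), with all other rows serving as negatives.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """Generic InfoNCE contrastive loss between two views (illustrative,
    not the paper's exact objective).

    z1, z2 : (n, d) arrays of node embeddings from two augmented views;
             row i of z1 and row i of z2 form a positive pair, every
             other pairing is treated as a negative.
    tau    : temperature controlling the sharpness of the softmax.
    """
    # L2-normalize rows so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    # Log-softmax over each row; positive pairs sit on the diagonal.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In a local/global two-level scheme such as the one described, a loss of this shape would be computed once over view pairs from the interaction graph and once over view pairs from the motif-augmented KG, then summed with the recommendation loss.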