Knowledge Graph Preference Contrastive Learning for Recommendation

Published: 16 Nov 2024 · Last Modified: 26 Nov 2024 · LoG 2024 Poster · License: CC BY 4.0
Keywords: Recommendation, Knowledge Graph, Contrastive Learning
TL;DR: In the proposed KPCL, preference learning and rationale attention track collaborative signals and identify informative knowledge triplets from macro and micro perspectives, respectively.
Abstract: Recent research has incorporated knowledge graphs to mitigate data sparsity in recommendation. However, while the rich information in knowledge graphs yields promising performance gains, it also introduces noise that can disrupt collaborative signals. To overcome this problem, we propose Knowledge Graph Preference Contrastive Learning for Recommendation (KPCL). A preference learning method and a rationale attention mechanism are designed to explicitly track collaborative signals and identify informative knowledge connections from macro and micro perspectives, respectively. Specifically, preference learning alleviates semantic dissonance in knowledge embeddings by exploring intent correlations in user-item interaction history, while rationale attention restructures the knowledge graph by discarding knowledge triplets with low attention scores as noise. By aggregating information in the knowledge graph only through the selected triplets, task-unrelated noise is filtered out, improving the performance of the knowledge-aware recommender system. Experimental results on three benchmark datasets demonstrate the superiority of KPCL over state-of-the-art methods. The implementation of KPCL is available at https://github.com/HuiCir/KPCL.
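The rationale attention step lends itself to a short illustration. Below is a minimal PyTorch sketch of attention-based triplet pruning: each (head, relation, tail) triplet receives a compatibility score, and only the top-scoring fraction is kept for aggregation. The scoring function `w_rel`, the `keep_ratio` hyperparameter, and the top-k rule are illustrative assumptions, not the authors' implementation; the paper's exact formulation is in the linked repository.

```python
import torch
import torch.nn as nn

class RationaleAttentionPruning(nn.Module):
    """Score (head, relation, tail) triplets and keep only the top-scoring
    fraction, treating low-attention triplets as noise to be pruned before
    knowledge-graph aggregation."""

    def __init__(self, dim: int, keep_ratio: float = 0.7):
        super().__init__()
        self.keep_ratio = keep_ratio  # assumed hyperparameter, not from the paper
        # Illustrative relation-aware scorer: project the head shifted by the
        # relation, then measure compatibility with the tail via a dot product.
        self.w_rel = nn.Linear(dim, dim, bias=False)

    def forward(self, head_emb: torch.Tensor, rel_emb: torch.Tensor,
                tail_emb: torch.Tensor):
        # All inputs: (num_triplets, dim) embeddings of the triplet components.
        scores = (self.w_rel(head_emb + rel_emb) * tail_emb).sum(dim=-1)
        num_keep = max(1, int(self.keep_ratio * scores.numel()))
        keep_idx = torch.topk(scores, num_keep).indices  # retained triplets
        return keep_idx, scores

# Usage: prune a toy batch of 10 triplets with 16-dimensional embeddings.
pruner = RationaleAttentionPruning(dim=16, keep_ratio=0.7)
h, r, t = (torch.randn(10, 16) for _ in range(3))
keep_idx, scores = pruner(h, r, t)
print(keep_idx.shape)  # torch.Size([7]): only 70% of triplets survive
```

Downstream aggregation would then run only over `keep_idx`, which is how pruning filters task-unrelated noise out of the propagated messages in this sketch.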
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 108