Multi-task Recommendation in Marketplace via Knowledge Attentive Graph Convolutional Network with Adaptive Contrastive Learning

Published: 2024 · Last Modified: 14 Nov 2025 · IEEE Big Data 2024 · CC BY-SA 4.0
Abstract: Marketplaces with multiple sellers have progressively evolved into viable business models in many web applications. Within this sphere, a marketplace recommendation model provides personalized suggestions for items and sellers that match users' preferences. However, most popular recommendation models are item-centric and neglect users' preferences for sellers, thereby underusing the seller-related information contained in marketplace datasets. To address these limitations, this study presents a novel model for marketplace recommendation that employs multi-task learning to jointly recommend items and sellers to users. We introduce KAROL, a model comprising two principal modules. The first is a Knowledge Attentive Graph Convolutional Network (KAGCN) built on the user-item-seller knowledge graph (KG). Specifically, relation-aware graph attention and LightGCN are employed to learn node embeddings of users, items, and sellers. Sellers and items reciprocally serve as knowledge bases for the bipartite graphs, transferring knowledge between tasks, and dual losses are used to generate recommendations for items and sellers concurrently. The second is Adaptive Contrastive Learning (ACL), a contrastive loss with three data-augmentation schemes: cross-relation sampling, edge dropping, and noise addition, which address knowledge sharing, structural consistency, and robustness, respectively. A further innovation is an adaptive temperature that is optimized automatically within the contrastive loss, removing the need for manual hyperparameter tuning. Experiments on three datasets demonstrate that our model outperforms nine baseline models on both item and seller recommendation tasks.
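To make the ACL module's ingredients concrete, the sketch below illustrates two of the pieces the abstract names: an edge-dropping augmentation and an InfoNCE-style contrastive loss with an explicit temperature. This is a minimal NumPy illustration under assumptions, not the paper's implementation: `edge_drop`, `info_nce`, and the fixed `temperature` argument are hypothetical stand-ins; in KAROL the temperature is a learnable parameter optimized jointly with the model rather than passed in by hand.

```python
import numpy as np

def edge_drop(edges, drop_rate=0.2, rng=None):
    """Structural augmentation: randomly drop a fraction of graph edges
    (one of the three augmentation schemes named in the abstract)."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(edges)) >= drop_rate
    return [e for e, k in zip(edges, keep) if k]

def info_nce(z1, z2, temperature):
    """InfoNCE contrastive loss between two views of node embeddings.
    Row i of z1 and row i of z2 form a positive pair; all other rows
    act as negatives. A learnable temperature (as in the paper's
    adaptive scheme) would replace the fixed argument here."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal
```

With a low temperature, matched views (where each node is most similar to its own augmented copy) yield a much smaller loss than unrelated views, which is the signal the contrastive objective exploits.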