UaMC: user-augmented conversation recommendation via multi-modal graph learning and context mining

Published: 01 Jan 2023 · Last Modified: 18 Jun 2024 · World Wide Web (WWW) 2023 · License: CC BY-SA 4.0
Abstract: Conversational Recommender Systems (CRSs) engage in multi-turn conversations with users and provide recommendations through their responses. Because user preferences evolve dynamically over the course of a conversation, it is crucial to understand natural interaction utterances in order to capture the user's dynamic preferences accurately. Existing research has focused on obtaining user preferences at the entity level and the natural language level, and on bridging the semantic gap between them through techniques such as knowledge augmentation, semantic fusion, and prompt learning. However, the representation at each level remains under-explored. At the entity level, user preferences are typically extracted from knowledge graphs, while data from other modalities is often overlooked. At the natural language level, the user representation is obtained from a fixed language model, disregarding the relationships between different contexts. In this paper, we propose User-augmented Conversation Recommendation via Multi-modal graph learning and Context Mining (UaMC) to address the above limitations. At the entity level, we enrich user preferences by leveraging multi-modal knowledge. At the natural language level, we employ contrastive learning to extract user preferences from similar contexts. Incorporating the enhanced user preference representation, we use prompt learning to generate responses related to the recommended items. We conduct experiments on two public CRS benchmarks, demonstrating the effectiveness of our approach on both the recommendation and conversation subtasks.
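The contrastive-learning step at the natural language level can be illustrated with a generic InfoNCE-style objective over context embeddings: a dialogue context is pulled toward a similar context and pushed away from dissimilar ones. This is a minimal sketch under common assumptions; the function names, the temperature value, and the use of cosine similarity are illustrative and not the paper's exact formulation.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """Generic InfoNCE contrastive loss for one anchor context embedding.

    anchor:    (d,) embedding of a dialogue context
    positive:  (d,) embedding of a similar context
    negatives: (n, d) embeddings of dissimilar contexts
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Exponentiated, temperature-scaled similarities.
    pos = np.exp(cos(anchor, positive) / temperature)
    negs = np.exp([cos(anchor, neg) / temperature for neg in negatives])

    # Loss is low when the anchor is closer to the positive than to negatives.
    return -np.log(pos / (pos + negs.sum()))
```

Minimizing this loss encourages contexts expressing similar user preferences to have nearby representations, which is the intuition behind mining user preferences from similar contexts.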
