Augmentation with Neighboring Information for Conversational Recommendation

Published: 01 Jan 2025, Last Modified: 24 Aug 2025 · ACM Trans. Inf. Syst. 2025 · CC BY-SA 4.0
Abstract: Conversational recommender systems (CRSs) suggest items to users by understanding their needs and preferences from natural language conversations. While users can freely express their preferences, modeling needs and preferences solely from conversations is challenging due to the sparsity of the available information. Prior work introduces external resources to enrich the information expressed in conversations, but obtaining such resources is difficult and not always effective. Can learning intrinsic relations among conversations and items enhance the available information without relying on external resources? Inspired by collaborative filtering, we propose to use so-called neighboring relations within the training data, i.e., relations between conversations, items, and similar conversations and items, to enhance our algorithmic understanding of CRSs. We propose a neighboring relations enhanced conversational recommender system (NR-CRS) and study how neighboring relations improve CRSs from two angles: (i) we mine preference information from neighboring conversations to enhance the modeling of user representations and the learning of user preferences; and (ii) we generate negative samples based on neighboring items to extend the data available for training CRSs. Experiments on the ReDial dataset show that NR-CRS outperforms the state-of-the-art baseline by 11.3–20.6% in recommendation performance while generating informative and diverse responses. We also assess the capabilities of large language models (i.e., Llama 2, Llama 3, and Chinese-Alpaca2) for CRSs. While their generated responses exhibit enhanced fluency and informativeness, recommending target items with LLMs remains challenging; we therefore recommend using LLMs as a decoding base for NR-CRS to generate relevant and informative responses.