Paper Link: https://openreview.net/forum?id=igVquoXGiLo
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: A key challenge for Conversational Recommendation Systems (CRS) is to integrate the recommendation function and the dialog generation function smoothly. Previous works employ graph neural networks over external knowledge graphs (KGs) to model individual recommendation items, and integrate KGs with language models through an attention mechanism for response generation. Although these approaches have proven effective, there is still room for improvement. For example, KG-based approaches rely only on entity relations and bag-of-words features to recommend items, neglecting the information in the conversational context. We propose to improve the use of dialog context for both recommendation and response generation through an encoding architecture built on the self-attention mechanism of transformers. In this paper, we propose a simple yet effective architecture comprising a pre-trained language model (PLM) and an item metadata encoder that better integrates recommendation and dialog generation. The proposed item encoder learns to map item metadata to embeddings that reflect the rich information of the item and can be matched against the dialog context. The PLM then consumes the context-aware item embeddings and the dialog context to generate high-quality recommendations and responses. Experimental results on the benchmark dataset ReDial show that our model obtains state-of-the-art results on both the recommendation and response generation tasks.
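The core idea in the abstract, mapping item metadata to embeddings and matching them against a dialog-context representation, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: it uses averaged token embeddings and dot-product scoring in place of the learned encoder and PLM, and all names (`encode_item`, `score_items`, the toy vocabulary) are assumptions introduced here.

```python
# Hedged sketch (NOT the paper's code): rank candidate items by matching
# item-metadata embeddings against a dialog-context embedding via dot product,
# mirroring the idea of context-aware item embeddings described in the abstract.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def encode_item(metadata_tokens, vocab_vectors):
    """Encode an item by averaging the embeddings of its metadata tokens
    (e.g., title words, genres). A stand-in for the learned item encoder."""
    vecs = [vocab_vectors[t] for t in metadata_tokens if t in vocab_vectors]
    if not vecs:
        return [0.0] * len(next(iter(vocab_vectors.values())))
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def score_items(context_vec, items, vocab_vectors):
    """Score every item against the dialog-context vector and sort by score."""
    scored = [(name, dot(context_vec, encode_item(meta, vocab_vectors)))
              for name, meta in items.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy 2-d "embeddings" for a handful of metadata tokens (illustrative values).
vocab = {"sci-fi": [1.0, 0.0], "space": [0.9, 0.1],
         "romance": [0.0, 1.0], "comedy": [0.1, 0.9]}
items = {"Interstellar": ["sci-fi", "space"],
         "Notting Hill": ["romance", "comedy"]}
context = [0.8, 0.2]  # dialog-context vector leaning toward sci-fi

ranking = score_items(context, items, vocab)
print(ranking[0][0])  # the top-scoring item for this context
```

In the actual model, the context vector would come from the PLM's representation of the conversation and the item embeddings from the trained metadata encoder; the toy dot-product scoring only illustrates how context-aware matching can drive the recommendation side of the system.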
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC-5
Copyright Consent Signature (type Name Or NA If Not Transferrable): Bowen Yang
Copyright Consent Name And Address: Columbia University, New York, NY 10027, United States