Improving the Text Convolution Mechanism with Large Language Model for Review-Based Recommendation

Published: 15 Dec 2024, Last Modified: 18 Mar 2025 · 2024 IEEE International Conference on Big Data (Big Data) · CC BY 4.0
Abstract: Recent studies in recommender systems address data sparsity and cold-start problems by utilizing side information such as tags, images, and reviews. Among these, user-written purchase reviews are especially valuable for analyzing personal preferences, and many methods have been developed on this basis. Existing methods generally apply 2D text convolution and then select important words with an attention mechanism. However, the text convolution scheme inevitably suffers from information loss, since reviews commonly run to hundreds of words. To address this limitation, we turn to the Large Language Model (LLM), which has shown promising results in fields including search engines, natural language processing, and healthcare. In particular, LLMs have demonstrated excellent performance in text summarization and QA tasks, spurring the development of text-based recommender systems. Nevertheless, an LLM alone struggles to perform collaborative filtering, which is essential in a recommender system. We therefore propose applying LLM-based text summarization before the 2D convolution, followed by the widely used collaborative filtering mechanism. By removing unnecessary words in advance, this approach reduces the smoothing effect while still capturing rich user-item interactions. Integrated with recent text-based recommendation algorithms, our method improves the quality of all baselines by about 16.9% on average. Experiments and ablation studies on benchmark datasets demonstrate that our method is scalable and efficient.
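The pipeline sketched in the abstract (summarize each review with an LLM, then run a 2D text convolution over word embeddings before a collaborative-filtering interaction) could look roughly like the following. This is a minimal illustrative sketch, not the paper's implementation: `summarize_review` is a hypothetical stand-in for an actual LLM summarization call, the embeddings are random, and the kernel and interaction vector are untrained placeholders.

```python
import numpy as np

def summarize_review(review: str, max_words: int = 8) -> str:
    # Hypothetical stand-in for an LLM summarization call;
    # here we simply truncate to the first max_words words.
    return " ".join(review.split()[:max_words])

def text_convolution(word_embeddings: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """2D text convolution: slide a (window, dim) kernel along the word
    axis, yielding one feature per window position (valid padding)."""
    n_words, dim = word_embeddings.shape
    window = kernel.shape[0]
    feats = np.array([
        np.sum(word_embeddings[i:i + window] * kernel)
        for i in range(n_words - window + 1)
    ])
    return np.maximum(feats, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
review = ("great battery life but the screen scratches easily "
          "and shipping was slow overall")
summary = summarize_review(review)          # shorten the text first
words = summary.split()
embed = {w: rng.standard_normal(4) for w in words}   # toy word embeddings
emb = np.stack([embed[w] for w in words])            # shape: (words, dim)
kernel = rng.standard_normal((3, 4))                 # 3-word window
user_feat = text_convolution(emb, kernel)            # review-derived features
item_feat = rng.standard_normal(user_feat.shape[0])  # placeholder item factor
score = float(user_feat @ item_feat)                 # CF-style dot product
```

Summarizing before convolving keeps the word axis short, so fewer window positions are averaged together, which is the "reduced smoothing effect" the abstract refers to.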