Abstract: Large language models (LLMs) have recently received significant attention for their exceptional capabilities. Despite extensive efforts
in developing general-purpose LLMs that can be utilized in various natural language processing (NLP) tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named PALR (Personalization
Aware LLMs for Recommendation), aimed at integrating user history behaviors (such as clicks, purchases, and ratings) with LLMs
to generate user-preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval, and then adopt an
LLM-based ranking model to generate recommended items. Existing approaches commonly rely on off-the-shelf LLMs for zero/few-shot inference, or fine-tune small language models (fewer than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities or their rich parametric knowledge of items. In contrast, we fine-tune a 7-billion-parameter LLM for the ranking task. During inference, this model takes the retrieved candidates in natural-language format as input, with instructions explicitly asking it to select items from those candidates based on the user's history behaviors. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.