Abstract: Recommender systems aim to predict user interest based on historical behavioral data.
They are typically designed as sequential pipelines that require large amounts of data to train their different sub-systems and are difficult to scale to new domains. Recently, Large Language Models (LLMs) have demonstrated remarkable generalization capabilities, enabling a single model to tackle diverse recommendation tasks across various scenarios.
Nonetheless, existing LLM-based recommendation systems employ LLMs for only a single task within the recommendation pipeline.
Moreover, these systems struggle to present large-scale item sets to LLMs in natural language, owing to input-length constraints.
To address these challenges, we introduce an LLM-based end-to-end recommendation framework: UniLLMRec.
Specifically, UniLLMRec integrates multi-stage recommendation tasks (e.g., recall, ranking, and re-ranking) via a chain-of-recommendations.
To handle large-scale item sets, we propose a novel strategy that structures all items into a semantic item tree, which can be dynamically updated and efficiently retrieved.
UniLLMRec achieves promising zero-shot results compared with supervised models, and it is highly efficient, reducing input tokens by 86\% relative to existing LLM-based models.
Our code is available to ease reproduction.\footnote{\url{https://anonymous.4open.science/r/UniLLMRec-E7AB/}}
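To make the semantic item tree concrete, below is a minimal Python sketch under stated assumptions: the names (Node, SemanticItemTree, add_item, retrieve, _overlap) are hypothetical illustrations rather than the UniLLMRec API, and the LLM-driven branch selection described in the abstract is stubbed out with a simple token-overlap score.

class Node:
    def __init__(self, label):
        self.label = label      # semantic label, e.g., a topic or category name
        self.children = {}      # label -> Node
        self.items = []         # items attached at this node

class SemanticItemTree:
    """Hypothetical sketch of a semantic item tree; not the paper's code."""
    def __init__(self):
        self.root = Node("root")

    def add_item(self, path, item):
        # Dynamic update: insert an item under a semantic path,
        # e.g., ["Sports", "Soccer"], creating nodes as needed.
        node = self.root
        for label in path:
            node = node.children.setdefault(label, Node(label))
        node.items.append(item)

    def retrieve(self, query, beam=1):
        # Level-by-level traversal that keeps only the `beam` most
        # relevant branches. UniLLMRec would let an LLM choose among
        # the child labels; a token-overlap score stands in here.
        frontier, results = [self.root], []
        while frontier:
            nxt = []
            for node in frontier:
                results.extend(node.items)
                ranked = sorted(node.children.values(),
                                key=lambda c: _overlap(query, c.label),
                                reverse=True)
                nxt.extend(ranked[:beam])
            frontier = nxt
        return results

def _overlap(query, label):
    # Placeholder relevance score: count of shared lowercase tokens.
    return len(set(query.lower().split()) & set(label.lower().split()))

tree = SemanticItemTree()
tree.add_item(["Sports", "Soccer"], "World Cup final recap")
tree.add_item(["Politics", "Elections"], "Polling-day explainer")
print(tree.retrieve("sports soccer highlights"))  # -> ['World Cup final recap']

Because only a handful of node labels per level is ever placed in the prompt, rather than the full item catalogue, this kind of traversal is one way such a tree could sidestep the input-length constraint noted above.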
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English