LLM-Rec: Personalized Recommendation via Prompting Large Language Models

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: recommendation, large language model, input augmentation, personalization
Abstract: Text-based recommendation has a wide range of practical applications due to its versatility, as textual descriptions can represent nearly any type of item. However, directly employing the original item descriptions as input features may not yield optimal recommendation performance. This limitation arises because these descriptions often lack comprehensive information that can be effectively exploited to align with user preferences. Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning. In this study, we investigate diverse prompting strategies aimed at $\textit{augmenting the input text}$ to enhance personalized text-based recommendations. Our novel approach, coined $\textbf{LLM-Rec}$, encompasses four distinct prompting techniques: (1) basic prompting, (2) recommendation-driven prompting, (3) engagement-guided prompting, and (4) recommendation-driven + engagement-guided prompting. Our empirical experiments show that incorporating the augmented input text generated by the LLMs yields discernible improvements in recommendation performance. Notably, the recommendation-driven and engagement-guided prompting strategies exhibit the capability to tap into the language model's comprehension of both general and personalized item characteristics. This underscores the significance of leveraging a spectrum of prompts and input augmentation techniques to enhance the recommendation prowess of LLMs.
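To make the four prompting strategies concrete, below is a minimal sketch of how such input augmentation might look in practice. The exact prompt wording, the `Item` data structure, and the `call_llm` callable are assumptions for illustration and are not taken from the paper; the only grounding is the abstract's description of the four variants (basic, recommendation-driven, engagement-guided, and their combination).

```python
# Illustrative sketch only: prompt wording, Item, and call_llm are hypothetical,
# not the authors' actual prompts or code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Item:
    description: str                                   # original textual item description
    engaged_neighbors: List[str] = field(default_factory=list)  # descriptions of items the user engaged with


def build_prompts(item: Item) -> Dict[str, str]:
    """Construct the four LLM-Rec-style prompt variants for one item."""
    neighbors = "\n".join(item.engaged_neighbors)
    basic = f"Paraphrase the following item description:\n{item.description}"
    rec_driven = (
        "Describe the following item, emphasizing what would make it worth "
        f"recommending to a user:\n{item.description}"
    )
    eng_guided = (
        "Summarize what the target item has in common with the items the user has engaged with.\n"
        f"Target item: {item.description}\nEngaged items:\n{neighbors}"
    )
    rec_plus_eng = (
        "Based on the items the user has engaged with, describe the target item in a way that "
        f"highlights why it should be recommended.\nTarget item: {item.description}\n"
        f"Engaged items:\n{neighbors}"
    )
    return {
        "basic": basic,
        "recommendation_driven": rec_driven,
        "engagement_guided": eng_guided,
        "recommendation_driven+engagement_guided": rec_plus_eng,
    }


def augment_description(item: Item, call_llm: Callable[[str], str]) -> str:
    """Concatenate the original description with the LLM responses to all four prompts."""
    responses = [call_llm(prompt) for prompt in build_prompts(item).values()]
    return "\n".join([item.description, *responses])
```

In this reading of the abstract, the augmented text returned by `augment_description` would replace the raw item description as the input feature to a downstream text-based recommendation model.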
Supplementary Material: pdf
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2988