Keywords: Generative Retrieval, Sequential Recommendation, Preference Discerning, LLM
TL;DR: We propose a new paradigm called preference discerning, along with a benchmark and a new baseline, and evaluate the preference-discerning capabilities of state-of-the-art generative retrieval methods
Abstract: Sequential recommendation systems aim to provide personalized recommendations for users based on their interaction history. To achieve this, they often incorporate auxiliary information, such as textual descriptions of items, as well as auxiliary tasks, such as predicting user preferences and intent. Despite numerous efforts to enhance these models, they still suffer from limited personalization. To address this issue, we propose a new paradigm, which we term *preference discerning*. In *preference discerning*, we explicitly condition a generative sequential recommendation system on user preferences within its context. The user preferences are generated by large language models (LLMs) based on user reviews. To evaluate the *preference discerning* capabilities of sequential recommendation systems, we introduce a novel benchmark that provides a holistic evaluation across various scenarios, including preference steering and sentiment following. We assess current state-of-the-art methods using our benchmark and show that they struggle to accurately discern user preferences. Therefore, we propose a new method named Mender (**M**ultimodal prefer**en**ce **d**iscern**er**), which improves upon existing methods and achieves state-of-the-art performance on our benchmark. Our results show that Mender can be effectively guided by human preferences, paving the way toward more personalized sequential recommendation systems. We will open-source the code and benchmarks upon publication.
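The abstract describes conditioning a generative sequential recommender on user preferences placed in its input context. A minimal sketch of what such conditioning could look like is given below; all names (`build_conditioned_context`, the prompt layout) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of the preference-discerning setup: a generative
# sequential recommender receives explicit user preferences (e.g., extracted
# from reviews by an LLM) alongside the interaction history in its context.
# The prompt format here is an assumption for illustration only.

def build_conditioned_context(preferences, interaction_history):
    """Assemble the model's input: explicit preferences first, then history."""
    pref_block = "\n".join(f"- {p}" for p in preferences)
    hist_block = " -> ".join(interaction_history)
    return (
        "User preferences:\n"
        f"{pref_block}\n"
        f"Interaction history: {hist_block}\n"
        "Next item:"
    )

context = build_conditioned_context(
    preferences=["prefers waterproof hiking gear", "dislikes bright colors"],
    interaction_history=["trail shoes", "rain jacket", "trekking poles"],
)
print(context)
```

The context string would then be fed to the generative model, which decodes the identifier of the next recommended item; steering the recommendation amounts to editing the preference lines.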
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9436