ULMRec: User-centric Large Language Model for Sequential Recommendation

ACL ARR 2025 May Submission900 Authors

16 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated promising performance in sequential recommendation, leveraging their superior language understanding capabilities. However, most existing LLM-based recommendation models primarily capture sequential patterns between items and overlook the nuanced nature of individual user preferences, i.e., users with similar interaction histories may demonstrate different interests. To alleviate this limitation, we propose ULMRec, a framework that effectively integrates personalized user preferences into LLMs for sequential recommendation. To integrate these preferences, we design two key components: (1) user indexing: a personalized user indexing mechanism that applies vector quantization to user reviews and user IDs to generate meaningful and unique user representations, and (2) alignment tuning: an alignment-based tuning stage that employs comprehensive preference alignment tasks to enhance the model's capability for capturing personalized information. In this way, ULMRec achieves a deeper integration of language semantics with personalized user preferences, facilitating effective adaptation to recommendation. Extensive experiments on two public datasets demonstrate that ULMRec significantly outperforms existing methods, validating the effectiveness of our approach. The code is available at https://anonymous.4open.science/r/ULMRec.
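The user indexing component described in the abstract maps each user to a discrete index via vector quantization over review-derived embeddings. A minimal sketch of the core quantization step is shown below; the codebook size (256), embedding dimension (64), and the `quantize_user` helper are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch: assign a user's pooled review embedding to the
# nearest entry of a learned codebook, yielding a discrete user index.
# Codebook size and embedding dimension are assumed for illustration.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 64))  # 256 codes, 64-dim embeddings

def quantize_user(user_embedding: np.ndarray) -> int:
    """Return the index of the nearest codebook vector (L2 distance)."""
    dists = np.linalg.norm(codebook - user_embedding, axis=1)
    return int(np.argmin(dists))

user_emb = rng.standard_normal(64)  # stand-in for a pooled review embedding
index = quantize_user(user_emb)
```

In practice such a codebook would be learned jointly (e.g., with a commitment loss as in VQ-VAE-style training) rather than sampled randomly; this sketch only illustrates the lookup that turns a continuous user representation into a discrete, LLM-consumable index.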
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: user-centered design
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 900