SOLAR: Serendipity Optimized Language Model Aligned for Recommendation

ACL ARR 2025 May Submission 2701 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Recently, Large Language Models (LLMs) have shown strong potential in recommendation tasks due to their broad world knowledge and reasoning capabilities. However, applying them to serendipity-oriented recommendation remains challenging, mainly due to the domain gap between LLMs and personalized user behavior modeling, and the scarcity of labeled serendipitous interactions. In this paper, we introduce \textbf{SOLAR} (\textbf{S}erendipity-\textbf{O}ptimized \textbf{L}anguage model \textbf{A}ligned for \textbf{R}ecommendation), a two-stage framework that addresses both challenges. To alleviate label scarcity, we adopt a weak supervision strategy: a sequential ID-based recommender generates candidate items, which an LLM acting as a preference judge then reranks to produce serendipity-aware pseudo-labels. To bridge the domain gap, we propose SUN, a domain-adaptive instruction tuning method that aligns LLMs with recommendation tasks. Experiments on three real-world datasets show that \textbf{SOLAR} consistently improves both accuracy and serendipity over strong baselines, demonstrating its effectiveness in enabling more diverse, user-centric recommendations. Code and dataset are released at https://github.com/SOLAR2025ARR/SOLAR
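As a rough illustration of the first stage described in the abstract, the sketch below shows how weak-supervision pseudo-labels might be produced: an ID-based sequential recommender supplies an accuracy-oriented shortlist, and an LLM preference judge reranks it for serendipity. All names here (generate_pseudo_labels, candidate_generator, llm_judge) are hypothetical stand-ins, not the authors' released code; the actual implementation is in the repository linked above.

```python
# Minimal sketch of SOLAR's weak-supervision labeling stage, per the abstract.
# All interfaces are illustrative assumptions, not the paper's actual API.
from typing import Callable, List, Sequence

def generate_pseudo_labels(
    user_history: Sequence[str],
    candidate_generator: Callable[[Sequence[str], int], List[str]],
    llm_judge: Callable[[Sequence[str], List[str]], List[str]],
    top_k: int = 20,
    n_labels: int = 5,
) -> List[str]:
    """Stage 1: a sequential ID-based recommender proposes candidates,
    which an LLM acting as a preference judge reranks to yield
    serendipity-aware pseudo-labels."""
    candidates = candidate_generator(user_history, top_k)  # accuracy-oriented shortlist
    reranked = llm_judge(user_history, candidates)         # serendipity-aware reranking
    return reranked[:n_labels]                             # pseudo-labels for tuning

# Toy usage with stand-in components (for illustration only):
toy_generator = lambda history, k: [f"item_{i}" for i in range(k)]
toy_judge = lambda history, cands: list(reversed(cands))   # pretend LLM reranking
print(generate_pseudo_labels(["item_a", "item_b"], toy_generator, toy_judge))
```

The resulting pseudo-labels would then supply training targets for the second stage, the SUN domain-adaptive instruction tuning, which this sketch does not cover.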
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Language Modeling, NLP Applications, Recommendation Systems, Serendipity Optimization, Large Language Models, Prompting Strategies, User-Centric Recommendations
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Keywords: Language Modeling, NLP Applications, Recommendation Systems, Serendipity Optimization, Large Language Models, Prompting Strategies, User-Centric Recommendations
Submission Number: 2701