SOLAR: Serendipity Optimized Language Model Aligned for Recommendation

ACL ARR 2025 February Submission 4851 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) possess broad world knowledge and have recently shown potential for diversifying recommendations. However, they face two key challenges: a domain gap in capturing user behavior patterns and a scarcity of human-labeled data for serendipitous recommendations. In this paper, we propose \textbf{SOLAR}, a serendipity-optimized language model aligned for recommendation, which bridges these gaps through a three-step process. First, we train an ID-based model that balances accuracy and serendipity via human-centric labels. We then generate large-scale, high-quality fine-tuning data via a two-step prompting strategy using an LLM-based reranker. Finally, we construct a recommendation-specialized unified tuning network (\textbf{SUN}) to align the LLM with recommendation tasks using domain-adaptive instructions. Experiments across multiple datasets demonstrate that \textbf{SOLAR} consistently outperforms baseline models in both accuracy and serendipity, offering a promising solution to break free from filter bubbles and promote more diverse, user-centric recommendations.
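To make the three-step process concrete, below is a minimal Python sketch of how steps 1 and 2 could fit together: an ID-based scorer whose objective blends an accuracy signal with human serendipity labels, followed by candidate retrieval and LLM-based reranking to build instruction-tuning data for step 3. Every name here (train_id_model, llm_rerank, the weighting alpha, the data fields) is an illustrative assumption, not the authors' released implementation.

```python
# Hypothetical sketch of SOLAR steps 1-2, as described in the abstract.
# All function names, fields, and the alpha trade-off are assumptions;
# the LLM reranker is stubbed with a deterministic shuffle.

import random
from typing import Callable, Dict, List


def train_id_model(interactions: List[Dict], alpha: float = 0.5) -> Callable[[int, int], List[int]]:
    """Step 1 (assumed form): fit an ID-based recommender whose objective
    mixes an accuracy term with a human-labeled serendipity term, weighted
    by alpha. Here a simple additive score stands in for real training."""
    scores: Dict[int, float] = {}
    for it in interactions:
        # Blend the click signal (accuracy) with the human serendipity label.
        scores[it["item_id"]] = scores.get(it["item_id"], 0.0) \
            + alpha * it["clicked"] + (1 - alpha) * it["serendipity_label"]

    def recommend(user_id: int, k: int) -> List[int]:
        return sorted(scores, key=scores.get, reverse=True)[:k]

    return recommend


def llm_rerank(user_id: int, items: List[int]) -> List[int]:
    """Step 2b placeholder: the paper uses an LLM-based reranker driven by
    a two-step prompt; a seeded shuffle stands in for the LLM call here."""
    rng = random.Random(user_id)
    return rng.sample(items, len(items))


def build_finetune_data(recommend, users: List[int]) -> List[Dict]:
    """Step 2: retrieve candidates with the ID model, rerank with the LLM,
    and emit instruction-tuning examples for the step-3 SUN alignment."""
    return [{"instruction": f"Recommend serendipitous items for user {u}",
             "output": llm_rerank(u, recommend(u, k=20))[:5]}
            for u in users]


if __name__ == "__main__":
    # Toy interaction log with assumed fields: item_id, clicked, serendipity_label.
    logs = [{"item_id": i % 7, "clicked": i % 2, "serendipity_label": 0.3}
            for i in range(100)]
    rec = train_id_model(logs)
    print(build_finetune_data(rec, users=[1, 2])[0])
```

In this reading, the key design choice is that the ID model (step 1) supplies behaviorally grounded candidates, so the LLM reranker (step 2) never has to retrieve from the full catalog, which is how the pipeline sidesteps the domain gap and the label-scarcity problem at once.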
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Language Modeling, NLP Applications, Recommendation Systems, Interpretability for NLP, Serendipity Optimization, Large Language Models, Prompting Strategies, User-Centric Recommendations
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 4851