Since 04 Oct 2024 · CC BY 4.0
Recommender systems aim to understand users' complex preferences from their past interactions. Deep collaborative filtering paradigms, which leverage advanced neural architectures such as Graph Neural Networks (GNNs), excel at capturing collaborative relationships among users. However, they falter on sparse data and in zero-shot settings on unseen datasets, owing to the design constraints of the ID-based embedding functions in existing solutions; these limitations hinder robust generalization and adaptability. To address this, we propose a model-agnostic recommendation instruction-tuning paradigm that integrates large language models (LLMs) with collaborative filtering. Our Recommendation Language Model (RecLM) enhances the ability to capture diverse user preferences, and we design a reinforcement learning reward function to facilitate self-augmentation of our language models. Comprehensive evaluations demonstrate significant advantages of our approach across various settings: it can be integrated as a plug-and-play component with state-of-the-art recommender systems, yielding notable performance enhancements.
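To make the plug-and-play idea concrete, the following is a minimal, hypothetical sketch of how language-model-derived user profile embeddings might be fused with ID-based collaborative filtering embeddings before scoring items. All names, dimensions, and the convex-combination fusion rule are illustrative assumptions for exposition, not the paper's actual architecture or API.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): fuse ID-based CF
# embeddings with LLM-derived profile embeddings as a plug-and-play step.
rng = np.random.default_rng(0)
n_users, n_items, d = 4, 6, 8

id_emb = rng.normal(size=(n_users, d))       # ID-based CF user embeddings
profile_emb = rng.normal(size=(n_users, d))  # assumed LLM-derived profile embeddings
item_emb = rng.normal(size=(n_items, d))     # item embeddings

def fuse(id_e, prof_e, alpha=0.5):
    """One simple fusion choice: convex combination, then L2-normalize
    so dot-product scores are comparable across users."""
    fused = alpha * id_e + (1.0 - alpha) * prof_e
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

user_emb = fuse(id_emb, profile_emb)
scores = user_emb @ item_emb.T   # user-item preference scores
top1 = scores.argmax(axis=1)     # one recommended item per user
```

The fusion weight `alpha` is a stand-in for whatever learned combination a real system would use; the point is only that profile embeddings can augment, rather than replace, the CF backbone.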