Personalized Prompt for Sequential Recommendation

Published: 01 Jan 2024 · Last Modified: 01 Oct 2024 · IEEE Trans. Knowl. Data Eng. 2024 · CC BY-SA 4.0
Abstract: Pre-trained models have shown their power in sequential recommendation. Recently, prompt tuning has been widely explored and verified in NLP as a way to adapt pre-trained models, helping to extract useful knowledge for downstream tasks more effectively and parameter-efficiently, especially in cold-start scenarios. However, it is challenging to bring prompt tuning from NLP to recommendation, since the tokens of recommendation (i.e., items) number in the millions and lack concrete, explainable semantics, and sequence modeling in recommendation should be personalized. In this work, we first introduce prompts into recommendation models and propose a novel Personalized Prompt-based Recommendation (PPR) framework for cold-start recommendation. Specifically, we build personalized soft prompts via a prompt generator conditioned on user profiles, and enable sufficient training of these prompts via a new prompt-oriented contrastive learning objective. PPR is effective, parameter-efficient, and universal across various tasks. In both few-shot and zero-shot recommendation tasks, PPR models achieve significant improvements over baselines on three large-scale datasets. We also verify PPR's universality by adopting different recommendation models as backbones. Finally, we explore and confirm the capability of PPR on other tasks such as cross-domain recommendation and user profile prediction, shedding light on promising future directions for better using large-scale pre-trained recommendation models.
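To make the described mechanism concrete, below is a minimal PyTorch sketch of the idea in the abstract: a prompt generator maps user-profile features to a few soft prompt embeddings that are prepended to the item sequence before a frozen pre-trained sequential encoder, so that only the prompt generator is tuned. All module and parameter names (PromptGenerator, PromptedSeqRec, n_prompt_tokens, profile_dim) are illustrative assumptions for exposition, not the authors' released implementation, and the prompt-oriented contrastive objective is omitted here.

```python
# Minimal sketch of a personalized soft-prompt generator for a frozen
# pre-trained sequential recommender. Names and shapes are assumptions.
import torch
import torch.nn as nn


class PromptGenerator(nn.Module):
    """Maps a user-profile vector to a small set of soft prompt embeddings."""

    def __init__(self, profile_dim: int, hidden_dim: int, n_prompt_tokens: int = 4):
        super().__init__()
        self.n_prompt_tokens = n_prompt_tokens
        self.hidden_dim = hidden_dim
        self.mlp = nn.Sequential(
            nn.Linear(profile_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_prompt_tokens * hidden_dim),
        )

    def forward(self, profile: torch.Tensor) -> torch.Tensor:
        # profile: (batch, profile_dim) -> prompts: (batch, n_prompt_tokens, hidden_dim)
        return self.mlp(profile).view(-1, self.n_prompt_tokens, self.hidden_dim)


class PromptedSeqRec(nn.Module):
    """Prepends personalized soft prompts to the item embedding sequence and
    feeds the result to a frozen pre-trained sequential encoder."""

    def __init__(self, pretrained_encoder: nn.Module, item_embedding: nn.Embedding,
                 profile_dim: int, hidden_dim: int, n_prompt_tokens: int = 4):
        super().__init__()
        self.encoder = pretrained_encoder        # e.g., a Transformer-based sequence encoder
        self.item_embedding = item_embedding     # pre-trained item embedding table
        self.prompt_generator = PromptGenerator(profile_dim, hidden_dim, n_prompt_tokens)
        # Parameter-efficient tuning: freeze the backbone, train only the prompt generator.
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.item_embedding.parameters():
            p.requires_grad = False

    def forward(self, item_ids: torch.Tensor, profile: torch.Tensor) -> torch.Tensor:
        seq_emb = self.item_embedding(item_ids)          # (batch, seq_len, hidden_dim)
        prompts = self.prompt_generator(profile)         # (batch, n_prompt_tokens, hidden_dim)
        full_seq = torch.cat([prompts, seq_emb], dim=1)  # soft prompts prepended to the sequence
        return self.encoder(full_seq)                    # sequence representation for next-item prediction
```

In this sketch, only the prompt generator's parameters receive gradients, which mirrors the parameter-efficiency claim in the abstract; cold-start users are handled because the prompts depend on profile features rather than on interaction history.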