Abstract: With the expansion of business scenarios, real-world recommender systems face the challenge of constantly emerging new tasks in multi-task learning frameworks. In this article, we aim to improve the generalization ability of multi-task recommendation when dealing with new tasks. We propose MPT-Rec, a novel two-stage prompt-tuning MTL framework that addresses the task-irrelevance and training-efficiency problems of multi-task recommender systems. Specifically, we disentangle the task-specific and task-sharing information in the multi-task pre-training stage, and then use task-aware prompts to transfer knowledge from other tasks to the new task effectively. By freezing the parameters of the pre-trained tasks, MPT-Rec avoids the negative impact the new task might otherwise introduce and greatly reduces training costs. Extensive experiments on three real-world datasets show the effectiveness of our framework: MPT-Rec achieves the best performance compared with SOTA multi-task learning methods. Moreover, it maintains comparable model performance while vastly improving training efficiency for new task learning (i.e., training at most 10% of the parameters updated in the full-training setting). Our code is publicly available at https://github.com/BAI-LAB/MPT-Rec.
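To make the two-stage workflow concrete, below is a minimal, hypothetical PyTorch sketch of the idea the abstract describes: pre-train a model with disentangled shared and task-specific experts, then adapt to a new task by freezing all pre-trained parameters and tuning only a small task-aware prompt and output tower. This is not the authors' MPT-Rec implementation; all module names, the expert layout, and the prompt mechanism are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage prompt-tuning idea (not the authors' code).
import torch
import torch.nn as nn

class TwoStageMTL(nn.Module):
    """Stage 1 (pre-training): disentangle task-sharing and task-specific
    information with one shared expert plus one expert per pre-training task."""
    def __init__(self, in_dim, hid_dim, n_tasks):
        super().__init__()
        self.shared_expert = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.task_experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU()) for _ in range(n_tasks)]
        )
        self.towers = nn.ModuleList([nn.Linear(hid_dim, 1) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        # Combine shared and task-specific representations for a given task.
        h = self.shared_expert(x) + self.task_experts[task_id](x)
        return self.towers[task_id](h)

class NewTaskPromptHead(nn.Module):
    """Stage 2 (new task): freeze the pre-trained backbone and train only a
    task-aware prompt that reweights the transferred shared representation."""
    def __init__(self, backbone, hid_dim):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze pre-trained parameters
            p.requires_grad = False
        self.prompt = nn.Parameter(torch.zeros(hid_dim))  # task-aware prompt
        self.tower = nn.Linear(hid_dim, 1)

    def forward(self, x):
        with torch.no_grad():
            # Reuse only the task-sharing information from pre-training.
            shared = self.backbone.shared_expert(x)
        return self.tower(shared * torch.sigmoid(self.prompt))

backbone = TwoStageMTL(in_dim=32, hid_dim=64, n_tasks=2)
new_task = NewTaskPromptHead(backbone, hid_dim=64)
trainable = sum(p.numel() for p in new_task.parameters() if p.requires_grad)
total = sum(p.numel() for p in new_task.parameters())
print(f"trainable fraction for the new task: {trainable / total:.2%}")
```

Because the backbone is frozen, gradients flow only through the prompt and the new tower, which is how this kind of design keeps new-task training cheap and prevents the new task from degrading the pre-trained tasks.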