An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations

Published: 21 Sept 2023 · Last Modified: 27 Dec 2023 · NeurIPS 2023 poster
Keywords: graph contrastive learning, prompt tuning, recommendation system
TL;DR: An empirical study of the limitations of graph contrastive pre-training in recommendation tasks, leveraging prompt tuning to address them.
Abstract: Graph contrastive learning (GCL) has emerged as a potent technique for numerous graph learning tasks. It has been successfully applied to real-world recommender systems, where the contrastive loss and the downstream recommendation objectives are typically combined into a single overall objective function. Such a strategy is inconsistent with the original GCL paradigm, in which graph embeddings are pre-trained without involving downstream training objectives. In this paper, we propose a prompt-enhanced framework for GCL-based recommender systems, namely CPTPP, which fully leverages the advantages of the original GCL protocol through prompt tuning. Specifically, we first summarise user profiles in graph recommender systems to automatically generate personalized user prompts. These prompts are then combined with pre-trained user embeddings to conduct prompt tuning in downstream tasks, thereby narrowing the gap between the pre-training and downstream objectives. Extensive experiments on three benchmark datasets validate the effectiveness of CPTPP against state-of-the-art baselines. A further visualization experiment demonstrates that user embeddings generated by CPTPP have a more uniform distribution, indicating a better capacity to model the diversity of user preferences. The implementation code is available online to ease reproducibility: https://anonymous.4open.science/r/CPTPP-F8F4
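To make the prompt-tuning idea in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a personalized user prompt, generated from a user-profile summary, might be fused with a frozen, contrastively pre-trained user embedding before the downstream recommendation head. All module names, dimensions, and the fusion strategy are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming PyTorch: a prompt generator and a fusion layer are
# trained downstream while the GCL pre-trained user embeddings stay frozen.
import torch
import torch.nn as nn


class PromptedUserEncoder(nn.Module):
    def __init__(self, num_users: int, emb_dim: int = 64, profile_dim: int = 32):
        super().__init__()
        # User embeddings pre-trained (e.g., via graph contrastive learning),
        # kept frozen during downstream prompt tuning.
        self.pretrained = nn.Embedding(num_users, emb_dim)
        self.pretrained.weight.requires_grad = False
        # Hypothetical prompt generator: maps a user-profile summary vector
        # to a personalized prompt in the embedding space.
        self.prompt_gen = nn.Sequential(
            nn.Linear(profile_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, emb_dim),
        )
        # Fusion layer combining the prompt with the frozen embedding.
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, user_ids: torch.Tensor, profiles: torch.Tensor) -> torch.Tensor:
        frozen = self.pretrained(user_ids)   # (B, emb_dim), no gradient
        prompt = self.prompt_gen(profiles)   # (B, emb_dim), trainable
        return self.fuse(torch.cat([frozen, prompt], dim=-1))


# Toy usage: only the prompt generator and fusion layer receive gradients
# from the downstream recommendation loss.
encoder = PromptedUserEncoder(num_users=1000)
users = torch.randint(0, 1000, (8,))
profiles = torch.randn(8, 32)
user_repr = encoder(users, profiles)         # (8, 64) prompted user embeddings
```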
Supplementary Material: pdf
Submission Number: 2960