Improving Training and Evaluation of Message-Passing based GNNs for top-k recommendation

TMLR Paper 2033 Authors

09 Jan 2024 (modified: 27 Feb 2024) · Under review for TMLR
Abstract: Graph Neural Networks (GNNs), especially message-passing-based models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models due to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g., NDCG@k and Recall@k, they remain largely trained with proxy losses, e.g., the BPR loss. In this work, we explore the use of ranking loss functions to directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to facilitate end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored for ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation with an inductive user-centric protocol, providing a more accurate reflection of real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures while also exhibiting faster training, demonstrating the potential of ranking loss functions for improving GNN training in collaborative filtering.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We added a comparison to a method published in the 2023 WWW conference; we also corrected notations in Section 3, paragraph "Training Context," as suggested by the second reviewer.
Assigned Action Editor: ~Alessandro_Sperduti1
Submission Number: 2033