Multi-View Graph Neural Networks with Language Models for Multi-Source Recommender Systems

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Graph Neural Networks, Contrastive Learning, Self-Supervised Learning, Language Models, Social trust, Textual Reviews, Recommender Systems
Abstract: Graph Neural Networks (GNNs) have become increasingly popular in recommender systems due to their ability to model complex user-item relationships. However, current GNN-based approaches face several challenges: they rely primarily on sparse user-item interaction data, which can lead to overfitting and limit generalization, and they often overlook additional valuable information sources, such as social trust and user reviews, that can provide deeper insight into user preferences and improve recommendation accuracy. To address these limitations, we propose a multi-view GNN framework that integrates diverse information sources using contrastive learning and language models. Our method employs a lightweight Graph Convolutional Network (LightGCN) on user-item interactions to generate initial user and item representations. For the user view, we use an attention mechanism to integrate social trust information with user-generated textual reviews, which are encoded into high-dimensional vectors by a pre-trained language model. For the item view, we aggregate all reviews associated with each item and use the language model to generate item representations. We then construct an item graph by applying a meta-path to the user-item interactions. GCNs are applied to both the social trust network and the item graph, producing enriched embeddings for users and items. To align and unify these heterogeneous data sources, we employ a contrastive learning mechanism that enforces consistent and complementary representations across views. Experimental results on real-world datasets, including Epinions, Yelp, and Ciao, demonstrate significant performance improvements over state-of-the-art methods.
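
Illustrative sketch (not part of the submission text): the following minimal PyTorch code sketches two components described in the abstract, namely attention-based fusion of social-trust and language-model review embeddings for the user view, and an InfoNCE-style contrastive loss that aligns an auxiliary view with the LightGCN interaction view. All module names, dimensions, and the temperature value are assumptions for illustration, not the authors' implementation.

# Minimal sketch of (1) attention fusion for the user view and
# (2) a contrastive alignment loss across views.
# Names, dimensions, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UserViewFusion(nn.Module):
    """Attention over the social-trust embedding and the review-text embedding."""

    def __init__(self, dim: int):
        super().__init__()
        self.att = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1, bias=False)
        )

    def forward(self, social_emb: torch.Tensor, review_emb: torch.Tensor) -> torch.Tensor:
        # Stack the two information sources: (batch, 2, dim)
        views = torch.stack([social_emb, review_emb], dim=1)
        # Attention weights over the two sources: (batch, 2, 1)
        weights = torch.softmax(self.att(views), dim=1)
        # Weighted sum -> fused user-view embedding: (batch, dim)
        return (weights * views).sum(dim=1)


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Contrastive loss: the positive for each anchor is the same user (or item)
    in the other view; other rows in the batch serve as in-batch negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    batch, dim = 32, 64
    # Placeholders for embeddings produced upstream
    # (LightGCN on interactions, GCN on the trust graph, pre-trained LM on reviews).
    lightgcn_user = torch.randn(batch, dim)
    social_user = torch.randn(batch, dim)
    review_user = torch.randn(batch, dim)

    fused_user_view = UserViewFusion(dim)(social_user, review_user)
    loss = info_nce(lightgcn_user, fused_user_view)
    print(loss.item())

In this sketch the same pattern would apply symmetrically to the item view (aggregated review embeddings and the meta-path item-graph GCN), with the contrastive term summed over both views.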
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9523