NOISY MULTI-VIEW CONTRASTIVE LEARNING FRAMEWORK FOR ENHANCING TOP-K RECOMMENDATION

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Contrastive Learning, Recommendation Systems
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recommender systems have become an essential component of many online platforms, providing personalized recommendations to users. Collaborative filtering-based methods, such as matrix factorization, have been widely used to capture latent user-item preferences. Recently, graph-based methods have shown promising results by modeling the interactions between users and items as a graph and leveraging knowledge graphs (KGs) to learn user and item embeddings. Motivated by the recent success of contrastive learning in mining supervised signals from the data itself, we focus on establishing a noisy contrastive learning framework for knowledge-aware recommendation systems and propose a novel self-supervised noisy multi-view contrastive learning framework for improving top-K recommendation. Concretely, we propose a recommendation architecture that generates three different views of user-item interactions, together with a noise-addition module. The global-level structural view leverages an attention-based aggregation network (Wang et al., 2019d) to capture collaborative information in the entity-item-user graph. The item-item semantic view uses a K-nearest-neighbour item-item semantic module to incorporate semantic relations among items. The local view applies LightGCN (He et al., 2020) with noisy perturbations to generate robust user-item representations. We further employ two additional signals, a representation loss and a uniformity loss on positive pairs, to improve the quality of the representations and to ensure they are uniformly distributed in the representation space. Experimental results on two benchmark datasets demonstrate that our proposed method achieves superior performance compared with state-of-the-art methods. Additionally, we conducted extensive experiments on CTR-task datasets, reported in the supplementary material, which demonstrate that our framework generalizes to learning better user-item representations. All code needed to reproduce our results is available in an anonymous repository.
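Code sketch (illustrative): the following is a minimal, self-contained PyTorch sketch of two mechanisms the abstract names, namely adding random noise perturbations to embeddings to form a contrastive view, and computing representation (alignment) and uniformity losses on positive pairs. All function names, hyperparameters, and the specific form of the noise are assumptions for illustration, not the authors' exact implementation.

    import torch
    import torch.nn.functional as F

    def add_noise(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
        """Perturb embeddings with random noise of magnitude eps, kept in
        the same half-space as the embedding (a common choice in
        noise-based graph contrastive learning; assumed here)."""
        noise = F.normalize(torch.rand_like(emb), dim=-1) * emb.sign()
        return emb + eps * noise

    def representation_loss(x: torch.Tensor, y: torch.Tensor,
                            alpha: float = 2.0) -> torch.Tensor:
        """Pull positive pairs (two views of the same user/item) together."""
        x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniformity_loss(x: torch.Tensor, t: float = 2.0) -> torch.Tensor:
        """Encourage representations to spread uniformly on the hypersphere."""
        x = F.normalize(x, dim=-1)
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

    # Usage: perturb two copies of the same embeddings (e.g. LightGCN
    # outputs) to obtain a noisy positive pair, then regularize.
    emb = torch.randn(256, 64)  # placeholder user embeddings
    v1, v2 = add_noise(emb), add_noise(emb)
    loss = representation_loss(v1, v2) + 0.5 * uniformity_loss(v1)

The representation term corresponds to the alignment of positive pairs, while the uniformity term penalizes collapsed embeddings; the 0.5 weighting is an arbitrary illustrative choice.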
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7168