Co-contrastive Learning for Multi-behavior Recommendation

Published: 01 Jan 2022, Last Modified: 17 May 2023, PRICAI (3) 2022
Abstract: Multi-behavior recommender systems (MBR) typically exploit multiple types of user interaction behavior (e.g., view, add-to-cart, and purchase) to learn user preferences on the target behavior (i.e., purchase). Existing MBR models suffer from sparse supervision signals, which can degrade recommendation performance. Inspired by the recent success of contrastive learning in mining additional supervision signals from the raw data itself, we propose a non-sampling Co-Contrastive Learning (CCL) framework to enhance MBR. Technically, we first augment two views of the multi-behavior interaction graph (an interactive view and a fold view) that capture local and high-order structure simultaneously. Two asymmetric graph encoders are then applied over the two views; they recursively leverage the different structural information to generate ground-truth samples that collaboratively supervise each other via contrastive learning, yielding high-level node embeddings. A divergence constraint further improves co-contrastive learning performance. Extensive experiments on two real-world benchmark datasets demonstrate that CCL significantly outperforms state-of-the-art MBR methods.
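The abstract describes a co-contrastive objective in which embeddings from the two views supervise each other, plus a divergence constraint. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: the names (interactive_emb, fold_emb, tau, lambda_div), the InfoNCE form of the contrastive loss, and the cosine-similarity penalty used as the divergence constraint are all assumptions.

```python
# Minimal sketch of a co-contrastive objective between two graph views.
# All names and the exact loss forms are assumptions for illustration.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, tau=0.2):
    """InfoNCE loss: row i of `positive` is the ground-truth sample for
    row i of `anchor`; all other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / tau           # (N, N) similarity matrix
    labels = torch.arange(anchor.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


def co_contrastive_loss(interactive_emb, fold_emb, tau=0.2, lambda_div=0.1):
    """Each view's embeddings supervise the other (co-contrastive term),
    plus a divergence term that discourages the two encoders from
    collapsing into identical representations (assumed form)."""
    loss_i2f = info_nce(interactive_emb, fold_emb, tau)   # interactive view supervised by fold view
    loss_f2i = info_nce(fold_emb, interactive_emb, tau)   # fold view supervised by interactive view
    # Divergence constraint sketched as a cosine-similarity penalty;
    # the paper's exact constraint may differ.
    div = F.cosine_similarity(interactive_emb, fold_emb, dim=-1).mean()
    return loss_i2f + loss_f2i + lambda_div * div


if __name__ == "__main__":
    n_nodes, dim = 64, 32
    z_interactive = torch.randn(n_nodes, dim)  # embeddings from the interactive-view encoder
    z_fold = torch.randn(n_nodes, dim)         # embeddings from the fold-view encoder
    print(co_contrastive_loss(z_interactive, z_fold).item())
```

In this reading, the two encoder outputs play symmetric roles: each view's embedding serves as the positive sample for the other, while the divergence term keeps the asymmetric encoders from producing redundant views.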