Graph Convolution Network based Recommender Systems: Learning Guarantee and Item Mixture Powered Strategy

Published: 31 Oct 2022, Last Modified: 15 Jan 2023 · NeurIPS 2022 Accept
Keywords: generalization ability, recommender system, graph learning
TL;DR: This paper establishes a generalization guarantee for GCN-based recommendation models under inductive and transductive learning settings. Based on this theoretical understanding, we propose an Item Mixture (IMix) strategy to enhance recommendation.
Abstract: Inspired by their powerful representation ability on graph-structured data, Graph Convolution Networks (GCNs) have been widely applied to recommender systems and have shown superior performance. Despite their empirical success, there is a lack of theoretical exploration of properties such as generalization. In this paper, we take a first step towards establishing a generalization guarantee for GCN-based recommendation models under both inductive and transductive learning. We mainly investigate the roles of graph normalization and non-linear activation, provide some theoretical understanding, and conduct extensive experiments to further verify these findings empirically. Furthermore, based on the proven generalization bound and the challenge existing models face in learning over discrete data, we propose Item Mixture (IMix) to enhance recommendation. It models the discrete item space in a continuous manner by mixing the embeddings of positive-negative item pairs, and its effectiveness is guaranteed from both theoretical and empirical perspectives.
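As a rough illustration of the IMix idea summarized in the abstract, the sketch below interpolates the embeddings of sampled positive and negative items. The Beta-distributed mixing coefficient, the function name `imix_negatives`, and the exact interpolation form are assumptions for illustration only; the abstract states only that positive-negative item pair embeddings are mixed.

```python
import torch

def imix_negatives(pos_emb: torch.Tensor, neg_emb: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch of Item Mixture (IMix).

    Mixes positive and negative item embeddings to model the discrete
    item space in a continuous manner. The Beta(alpha, alpha) coefficient
    is an assumption, not taken from the paper.
    """
    # One mixing coefficient per (positive, negative) pair in the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample((pos_emb.size(0), 1))
    lam = lam.to(pos_emb.device)
    # Convex combination keeps the mixed item inside the embedding space.
    return lam * pos_emb + (1.0 - lam) * neg_emb

# Usage sketch: the mixed embeddings could replace or augment sampled
# negatives inside a pairwise (e.g., BPR-style) ranking loss.
# mixed_neg = imix_negatives(item_emb[pos_ids], item_emb[neg_ids])
```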
Supplementary Material: zip
