IGCN: Item Influence Enhanced Graph Convolution Networks for Recommendation of Cold-Start Items

Published: 01 Jan 2023 · Last Modified: 21 Jan 2025 · ICDM (Workshops) 2023 · License: CC BY-SA 4.0
Abstract: Recommender systems supported by collaborative filtering (CF) have been extensively deployed in various e-commerce platforms. However, CF models suffer from the cold-start issue: the limited preference history of cold-start users/items causes inaccurate similarity measurements. Addressing the cold-start issue has attracted considerable attention, but existing work mainly focuses on the user cold-start issue, pays less attention to the item cold-start issue, and still delivers unsatisfactory performance. To fill this gap, in this paper we address the item cold-start issue and propose a novel recommendation framework called Item Influence Enhanced Graph Convolution Networks (IGCN). We handle the item cold-start issue via influence-aware item representation learning. In particular, we mine the item influence signal to build an item influence graph. We then propose an influence-aware neural item aggregation module that captures high-order item-item influence signals by iteratively aggregating the influence-related item context. With the help of influence-aware item representation learning, both local and global item influence signals are captured to enrich the item representation, even for cold-start items. Furthermore, we propose a residual-connected factorization machine that combines the user and item embeddings for scoring and prediction while incorporating second-order interactions. Empirical results on three benchmark recommendation datasets demonstrate significant performance gains over existing state-of-the-art methods, not only in the cold-start setting (up to a 14% gain) but also in the regular setting (up to a 17% gain). A further efficiency study shows that the proposed method also has lower time complexity (up to 44% fewer epochs and up to 27% less training time) and space complexity (up to a 14% smaller graph).
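The abstract describes two mechanisms: iterative aggregation over an item-item influence graph, and a residual-connected factorization machine for scoring. The sketch below illustrates how such components are commonly realized in PyTorch; it is an illustration under assumptions, not the authors' implementation. The class names `InfluenceAggregation` and `ResidualFM`, the layer count, the layer-combination rule (averaging), and the FM latent dimension `fm_dim` are all hypothetical, since the abstract does not specify them.

```python
import torch
import torch.nn as nn


class InfluenceAggregation(nn.Module):
    """Sketch of influence-aware item aggregation: propagate item
    embeddings over a normalized item-item influence graph. Layer
    count and layer averaging are assumptions."""

    def __init__(self, num_layers: int = 3):
        super().__init__()
        self.num_layers = num_layers

    def forward(self, item_emb: torch.Tensor,
                norm_influence_adj: torch.Tensor) -> torch.Tensor:
        # norm_influence_adj: (I, I) sparse, symmetrically normalized.
        layers = [item_emb]
        h = item_emb
        for _ in range(self.num_layers):
            # One propagation hop: each item absorbs the context of its
            # influence-related neighbors; stacking hops captures
            # high-order item-item influence signals.
            h = torch.sparse.mm(norm_influence_adj, h)
            layers.append(h)
        # Average the layer outputs so both local (low-hop) and global
        # (high-hop) influence signals enter the final representation.
        return torch.stack(layers, dim=0).mean(dim=0)


class ResidualFM(nn.Module):
    """Sketch of a residual-connected factorization-machine scorer over
    the concatenated user/item embeddings. The abstract only states
    that second-order interactions and a residual are used; this exact
    layout is a guess."""

    def __init__(self, emb_dim: int, fm_dim: int = 16):
        super().__init__()
        self.linear = nn.Linear(2 * emb_dim, 1)          # first-order term
        # One FM latent vector per dimension of the concatenated input.
        self.v = nn.Parameter(torch.randn(2 * emb_dim, fm_dim) * 0.01)

    def forward(self, user_emb: torch.Tensor,
                item_emb: torch.Tensor) -> torch.Tensor:
        x = torch.cat([user_emb, item_emb], dim=-1)      # (B, 2d)
        first = self.linear(x).squeeze(-1)               # (B,)
        # Standard FM second-order term:
        # 0.5 * sum_f ((x v_f)^2 - (x^2)(v_f^2)).
        second = 0.5 * ((x @ self.v).pow(2)
                        - x.pow(2) @ self.v.pow(2)).sum(dim=-1)
        # Residual connection: keep the plain inner-product signal so
        # the FM refines rather than replaces it (an assumed reading of
        # "residual-connected").
        residual = (user_emb * item_emb).sum(dim=-1)
        return first + second + residual
```

In this reading, a cold-start item with few interactions still receives a usable embedding because the influence-graph propagation pulls in context from related items, and the FM scorer then combines that enriched item embedding with the user embedding.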