Graphs get personal: learning representation with contextual pretraining for collaborative filtering
Abstract: The interactions between users and items in recommender systems can be naturally modeled as a user-item bipartite graph. The iterative propagation of graph neural networks (GNNs) can explicitly exploit the high-order connectivity in these user-item interactions. Despite these advantages, GNN-based recommender systems have two limitations that may degrade performance: 1) Existing GNN methods rely only on the graph topology and ignore the informative relationships between connected nodes; the edge representations in GNNs capture only the structural connectivity of the graph and cannot effectively express the personalized information of users and items. 2) The representations of nodes and edges are initialized randomly, and these initial representations enter the subsequent propagation and update computations of the graph neural network; this directly affects the final representations of the user and item nodes and leads to poor recommendation performance. To address these issues, this study proposes a graph attention network with contextual pretraining (GAT-CP) for content-based collaborative filtering. It explicitly exploits the user-item graph structure in two ways. First, a contextual personalized sentiment analysis task is applied by fine-tuning a BERT model on user reviews to initialize the representations of nodes and edges, capturing user preferences for products. Second, the obtained edge representations are used as propagation constraints to assign different weights to the edges in the GAT. Comparative results show significant performance gains for GAT-CP and demonstrate the necessity of initializing nodes and edges with contextual tasks. The code for this paper is available at: https://github.com/Yellow4Submarine7/GAT_AP
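To make the second step concrete, the propagation can be pictured as a graph-attention update in which pretrained edge embeddings bias the attention weights assigned to each user-item edge. The PyTorch sketch below is illustrative only and is not taken from the authors' repository; the class, layer, and variable names (EdgeConditionedGATLayer, node_proj, edge_proj, edge_attr, etc.) are assumptions made for exposition.

```python
# Minimal sketch of an edge-conditioned graph-attention layer: node embeddings
# and per-edge review embeddings (e.g., from a fine-tuned BERT sentiment model)
# jointly determine the attention weight of each edge. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeConditionedGATLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, out_dim, bias=False)
        self.edge_proj = nn.Linear(edge_dim, out_dim, bias=False)
        # scores the concatenation [h_src || h_dst || e_edge]
        self.attn = nn.Linear(3 * out_dim, 1, bias=False)

    def forward(self, x, edge_index, edge_attr):
        # x:          (N, node_dim)  node embeddings (e.g., pretrained, not random)
        # edge_index: (2, E)         source/target indices of the bipartite graph
        # edge_attr:  (E, edge_dim)  pretrained edge (review/sentiment) embeddings
        src, dst = edge_index
        h = self.node_proj(x)
        e = self.edge_proj(edge_attr)

        # unnormalized attention logit per edge, conditioned on the edge embedding
        logits = F.leaky_relu(
            self.attn(torch.cat([h[src], h[dst], e], dim=-1))
        ).squeeze(-1)

        # softmax over the incoming edges of each destination node
        alpha = torch.exp(logits - logits.max())
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, alpha)
        alpha = alpha / (denom[dst] + 1e-16)

        # attention-weighted aggregation of neighbor messages
        out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return out
```

In this reading, an edge whose review expresses a strong preference can receive a larger attention weight than one inferred from topology alone, which is the role the abstract assigns to the pretrained edge representations.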