Abstract: Graph Neural Networks (GNNs) have achieved impressive success in recommender systems. However, their performance is highly sensitive to noisy data because of their distinctive message propagation mechanism, which amplifies the adverse impact of noisy information on representation learning. Although adversarial perturbation-based techniques have been used to improve the resilience of models against noise, they still face several challenges when applied to GNN-based recommenders. First, GNNs iteratively aggregate information from neighboring nodes, which amplifies the negative impact of noisy data on model training. Second, applying perturbations of uniform magnitude to all nodes can diminish or even corrupt the task-relevant semantic information of the generated user-item pairs (referred to as adversarial examples). To address these challenges, we propose a simple yet effective approach called RAP, which employs a two-stage learning framework. In the first stage, we construct a weighted bipartite graph that models the confidence score of each interaction, effectively blocking the propagation of noisy information through the GNN. In the second stage, RAP introduces noise-aware adversarial perturbations for different nodes: it captures the intrinsic robustness of each instance to help mitigate task-relevant semantic discrepancies between the original and adversarial examples. We conduct extensive experiments on three datasets to demonstrate the effectiveness of our method.
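To make the two stages concrete, the following is a minimal sketch of the ideas described above, not the authors' implementation: (1) confidence-weighted message passing on a user-item bipartite graph, so that low-confidence (likely noisy) interactions contribute less during propagation, and (2) per-node adversarial perturbations whose magnitudes are scaled by an assumed instance-level robustness signal (here, simply a node's average edge confidence) rather than a single uniform epsilon. All variable names, the toy BPR loss, and the choice of robustness signal are illustrative assumptions.

```python
# Hypothetical sketch (not RAP's released code): confidence-weighted propagation
# plus noise-aware, per-node adversarial perturbations.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n_users, n_items, dim = 4, 5, 8
user_emb = torch.randn(n_users, dim, requires_grad=True)
item_emb = torch.randn(n_items, dim, requires_grad=True)

# Observed interactions (user, item) with assumed confidence scores in [0, 1];
# low-confidence edges are down-weighted when messages are aggregated.
edges = torch.tensor([[0, 1], [0, 2], [1, 0], [2, 3], [3, 4]])
confidence = torch.tensor([0.9, 0.2, 0.8, 0.95, 0.6])

def propagate(u, v):
    """One confidence-weighted message-passing step over the bipartite graph."""
    w = confidence.unsqueeze(1)
    agg_u = torch.zeros_like(u).index_add_(0, edges[:, 0], w * v[edges[:, 1]])
    agg_v = torch.zeros_like(v).index_add_(0, edges[:, 1], w * u[edges[:, 0]])
    deg_u = torch.zeros(u.size(0)).index_add_(0, edges[:, 0], confidence).clamp(min=1e-8)
    deg_v = torch.zeros(v.size(0)).index_add_(0, edges[:, 1], confidence).clamp(min=1e-8)
    return agg_u / deg_u.unsqueeze(1), agg_v / deg_v.unsqueeze(1)

def bpr_loss(u, v):
    """Toy BPR-style loss on observed edges with random negative items."""
    pos = (u[edges[:, 0]] * v[edges[:, 1]]).sum(-1)
    neg = (u[edges[:, 0]] * v[torch.randint(0, n_items, (edges.size(0),))]).sum(-1)
    return -F.logsigmoid(pos - neg).mean()

# Stage 1: forward pass with confidence-weighted propagation, then backprop
# to obtain gradients for constructing adversarial perturbations.
u1, v1 = propagate(user_emb, item_emb)
loss = bpr_loss(u1, v1)
loss.backward()

# Stage 2: noise-aware perturbations -- each user's perturbation magnitude is
# scaled by an assumed per-node robustness signal (average edge confidence)
# instead of applying one uniform epsilon to every node.
eps = 0.1
conf_sum = torch.zeros(n_users).index_add_(0, edges[:, 0], confidence)
deg = torch.zeros(n_users).index_add_(0, edges[:, 0], torch.ones_like(confidence)).clamp(min=1)
node_conf_u = conf_sum / deg
delta_u = eps * node_conf_u.unsqueeze(1) * F.normalize(user_emb.grad, dim=-1)
adv_user_emb = user_emb.detach() + delta_u  # adversarial user examples

print(loss.item(), delta_u.norm(dim=-1))
```

Scaling the perturbation radius per node, as sketched here, is one plausible way to keep adversarial examples from drifting away from task-relevant semantics for noise-sensitive nodes; the paper's actual robustness estimate and perturbation rule may differ.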