Abstract: Learning user preferences by modeling historical purchase behaviors has been highly successful in existing recommender systems. Most systems use trained models to make predictions for users, assuming that training and test samples are drawn from the same distribution. In practice, however, the distribution of users' true preferences can be more complicated, and data drift can easily invalidate the trained model on the test set. Accurately modeling user preferences from historical behavior under such drift therefore raises two difficulties. First, the variety and complexity of purchase behavior shifts make them hard to model. Second, inferring users' true preferences from these shifting cases is challenging. To address these problems, we build a robust recommender system that predicts possible shifts in user purchases and makes recommendations accordingly. First, we propose a simulation strategy that covers possible scenarios in which user purchase behavior shifts. Second, we build a novel voting framework to ensure robust predictions based on the learned preferences. Extensive experiments demonstrate the strong performance of the proposed method on the MovieLens-1M and LastFM datasets, with relative performance gains of up to 37.31% and 30.95%, respectively.
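The abstract only names the two components; as a rough illustration of how a shift-simulation strategy and a voting framework could fit together, the sketch below (our own construction, not the paper's implementation) simulates behavior shifts by randomly dropping observed interactions, scores items with a deliberately simple item-similarity model, and ranks items by counting votes from the per-scenario recommenders. All function names (`simulate_shift`, `item_scores`, `vote_top_k`), the drop-based shift model, and the parameter values are illustrative assumptions.

```python
# Minimal sketch, assuming: shifts = random interaction drops, base model =
# item-item cosine similarity, aggregation = top-k vote counting.
import numpy as np

rng = np.random.default_rng(0)

def simulate_shift(interactions: np.ndarray, drop_rate: float = 0.2) -> np.ndarray:
    """One shifted view of a user-item matrix: randomly drop a fraction of
    observed interactions as a crude stand-in for behavior drift (assumption)."""
    mask = rng.random(interactions.shape) >= drop_rate
    return interactions * mask

def item_scores(interactions: np.ndarray) -> np.ndarray:
    """Score items per user via item-item cosine similarity, a deliberately
    simple preference model used only for illustration."""
    normed = interactions / (np.linalg.norm(interactions, axis=0, keepdims=True) + 1e-8)
    sim = normed.T @ normed          # item-item similarity matrix
    return interactions @ sim        # propagate each user's history to similar items

def vote_top_k(interactions: np.ndarray, n_views: int = 5, k: int = 3) -> np.ndarray:
    """Fit one model per simulated shift; each view nominates its top-k unseen
    items per user, and items are finally ranked by vote count."""
    votes = np.zeros_like(interactions, dtype=float)
    rows = np.arange(interactions.shape[0])[:, None]
    for _ in range(n_views):
        scores = item_scores(simulate_shift(interactions))
        scores[interactions > 0] = -np.inf            # never re-recommend seen items
        top_k = np.argsort(-scores, axis=1)[:, :k]    # this view's nominations
        np.add.at(votes, (rows, top_k), 1.0)          # one vote per nomination
    return np.argsort(-votes, axis=1)[:, :k]          # most-voted items per user

# Toy 6-user x 8-item implicit-feedback matrix.
X = (rng.random((6, 8)) > 0.6).astype(float)
print(vote_top_k(X))  # top-3 voted item indices per user
```

Vote counting over independently shifted views is one simple way to make the final ranking less sensitive to any single assumed drift pattern; the paper's actual framework may aggregate differently.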