Abstract: To provide precise recommendations, traditional recommender systems (RS) collect personal data, user preferences, and feedback, all of which are sensitive if such information is maliciously used for further analysis. In recent years, differential privacy (DP) has been widely applied in RS to protect this sensitive information. Prior studies have explored the combination of DP and RS but neglected the disparate impact on model accuracy across imbalanced subgroups: large user groups dominate the trained model, and DP can worsen this disparate effect, degrading recommendation performance significantly. Moreover, since the number of contributions uploaded for training can differ across users, a user-level privacy guarantee is necessary. In this paper, we make four contributions. First, we propose an efficient way of constructing datasets for training a recommender system based on prior theories. Second, we compute user-level priors from user metadata to optimize the VAE model, and we add noise to this computation to protect the metadata. Third, we analyze and prove a tighter theoretical bound on gradient updates for differentially private stochastic gradient descent (DP-SGD). Finally, we exploit these theoretical results to propose a novel DP-VAE-based recommender system. Extensive experiments on multiple datasets show that our system achieves high recommendation precision while maintaining a reasonable privacy guarantee.
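The core DP-SGD mechanism referenced in the abstract (clipping each per-example or per-user gradient contribution and adding calibrated Gaussian noise before the parameter update) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the function name `dpsgd_step` and its parameters are assumptions introduced here for clarity.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update: clip per-example gradients, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        # Scale each gradient so its L2 norm is at most clip_norm.
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # Average the clipped gradients over the batch.
    g_avg = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound (sensitivity of the sum).
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=g_avg.shape,
    )
    return params - lr * (g_avg + noise)
```

With `noise_multiplier=0` the step reduces to ordinary SGD on clipped gradients, which makes the clipping behavior easy to check in isolation; a user-level guarantee would additionally bound how many gradients each user contributes per batch.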