Abstract: To mitigate the rising concern of privacy leakage, the federated recommender (FR) paradigm emerges as a potential solution, in which decentralized clients co-train the recommendation model without exposing their raw user-item rating data. The differentially private federated recommender (DPFR) further enhances the FR by injecting differentially private (DP) noise into clients' data. Yet, current DPFRs suffer from noise distortion and cannot achieve satisfactory accuracy. Various efforts have been dedicated to improving DPFRs by adaptively allocating the privacy budget over the learning process. However, due to the intricate relation between privacy budget allocation and model accuracy, existing attempts are still far from maximizing DPFR accuracy. To address this challenge, we develop BGTplanner (Budget Planner) to strategically allocate the privacy budget for each round of DPFR training, improving overall training performance. Specifically, we leverage Gaussian process regression and historical information to predict the change in recommendation accuracy under a given allocated privacy budget. Additionally, a contextual multi-armed bandit (CMAB) is harnessed to make privacy budget allocation decisions by reconciling the current accuracy improvement with long-term privacy constraints. Our extensive experimental results on real datasets demonstrate that BGTplanner achieves an average improvement of 6.76% in training performance compared to state-of-the-art baselines.
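To make the allocation idea concrete, the following is a minimal, hypothetical sketch of the mechanism described in the abstract: a Gaussian process regressor predicts the accuracy gain expected from each candidate per-round privacy budget, and an optimistic (UCB-style) contextual bandit rule picks a budget while respecting the remaining total budget. All names, hyperparameters, candidate budget values, and the placeholder `observe_gain` environment are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: GP-based gain prediction + contextual-bandit budget selection.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

candidate_budgets = np.array([0.1, 0.2, 0.5, 1.0])  # per-round epsilon choices (assumed)
total_budget = 10.0                                  # overall privacy budget (assumed)
rounds = 50

# History of (context, chosen budget) -> observed accuracy improvement.
X_hist, y_hist = [], []

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

def observe_gain(round_idx, eps):
    """Placeholder environment: diminishing returns in round index and budget."""
    return eps / (1.0 + eps) / (1.0 + 0.1 * round_idx) + 0.01 * rng.standard_normal()

spent = 0.0
for t in range(rounds):
    feasible = candidate_budgets[candidate_budgets <= total_budget - spent]
    if feasible.size == 0:
        break  # total privacy budget exhausted
    context = np.array([t / rounds, spent / total_budget])
    if len(y_hist) >= 5:
        # Fit GP on history, then score each feasible budget optimistically.
        gp.fit(np.array(X_hist), np.array(y_hist))
        feats = np.array([np.concatenate([context, [eps]]) for eps in feasible])
        mean, std = gp.predict(feats, return_std=True)
        eps = feasible[int(np.argmax(mean + 0.5 * std))]  # UCB-style choice
    else:
        eps = rng.choice(feasible)  # explore while history is short
    gain = observe_gain(t, eps)
    X_hist.append(np.concatenate([context, [eps]]))
    y_hist.append(gain)
    spent += eps

print(f"rounds run: {len(y_hist)}, budget spent: {spent:.2f}")
```

The context here (training progress and fraction of budget spent) and the exploration bonus weight are assumptions; the paper's actual reward model, context features, and bandit policy are defined in the full text.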
DOI: 10.1109/TSC.2025.3616355