Interaction Privacy Vulnerability in Federated Recommendation and Lossless Countermeasure

Published: 01 Jan 2025 · Last Modified: 07 Oct 2025 · ACM Trans. Inf. Syst. 2025 · CC BY-SA 4.0
Abstract: Federated Recommendation (FedRec) systems are recognized as privacy-preserving solutions for collaboratively training recommender models without sharing users’ private data. However, recent studies have revealed that FedRec systems are vulnerable to interaction-level membership inference attacks, in which a semi-honest server employs crafted methods to infer which items a user has interacted with. In this article, we identify that user preference information is predominantly carried by the parameters users upload rather than by the parameters retained locally after training. Leveraging this insight, we expose a new interaction vulnerability and introduce the PubPara attack. Our experiments show that PubPara improves inference performance by at least 40% over existing attacks, while requiring minimal inference time and remaining robust against current defense methods. To safeguard user privacy without compromising recommendation quality, we propose MultiVerse, a novel countermeasure. MultiVerse utilizes untrained items outside the user’s local training data to obfuscate the server’s inference of interacted items, following a four-step strategy (training, optimization, refinement, and denoising) to achieve robust defense. Extensive experiments on three representative FedRec models (F-NCF, F-LightGCN, and FedRAP) across three real-world datasets validate that MultiVerse significantly degrades the attack’s inference performance to near the level of random guessing while keeping recommendation performance lossless.
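To make the underlying vulnerability concrete, the following is a minimal toy sketch, not the paper's PubPara or MultiVerse algorithms. It assumes a simple embedding-based FedRec setting in which local training only produces gradients for the embeddings of interacted items, so a server comparing uploaded embeddings against the global ones can read off the interactions; mixing in updates for untrained "fake" items (the obfuscation idea MultiVerse builds on) makes that rule over-report. All item sets, sizes, and the noise-based update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 20, 4

# Global item embeddings known to the server (illustrative setup).
global_emb = rng.normal(size=(n_items, dim))

interacted = {2, 5, 11}      # the user's private interactions
fake = {3, 7, 14, 17}        # untrained items used only for obfuscation

def local_update(emb, items, lr=0.1):
    """Toy local training: only the listed items' embeddings change."""
    out = emb.copy()
    for i in items:
        out[i] += lr * rng.normal(size=dim)  # stand-in for a gradient step
    return out

def infer_interactions(upload, reference):
    """Server-side rule: flag every item whose embedding changed."""
    return {i for i in range(len(upload))
            if not np.allclose(upload[i], reference[i])}

# Without defense: only interacted items change, so the
# changed-embedding rule recovers them exactly.
upload_plain = local_update(global_emb, interacted)
inferred_plain = infer_interactions(upload_plain, global_emb)

# With obfuscation: fake-item updates are mixed into the same upload,
# so the true interactions hide among the fakes.
upload_obf = local_update(upload_plain, fake)
inferred_obf = infer_interactions(upload_obf, global_emb)

print(inferred_plain == interacted)        # perfect inference
print(inferred_obf == interacted | fake)   # interactions hidden among fakes
```

The sketch only shows why uploaded parameters leak interactions and how fake updates blur that signal; MultiVerse's actual training/optimization/refinement/denoising steps additionally ensure the obfuscation does not degrade recommendation quality, which this toy example does not model.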