Aegis: Post-Training Attribute Unlearning in Federated Recommender Systems against Attribute Inference Attacks

Published: 29 Jan 2025, Last Modified: 29 Jan 2025 | WWW 2025 Poster | CC BY 4.0
Track: Systems and infrastructure for Web, mobile, and WoT
Keywords: Federated Unlearning
Abstract: As privacy concerns in recommender systems become increasingly prominent, federated recommender systems (FedRecs) have emerged as a promising distributed training paradigm. FedRecs enable collaborative training of a shared global recommendation model without exchanging raw client interaction data. However, models trained with standard FedRec methods remain vulnerable to personal information leakage, particularly through attribute inference attacks, which can expose sensitive user attributes such as gender and race. In this paper, we treat these sensitive user attributes as targets for federated unlearning. To protect users' sensitive information, attribute unlearning aims to eliminate sensitive attributes from user embeddings, thereby preventing inference attacks while preserving recommendation performance. We introduce Aegis, a novel post-training federated unlearning framework that performs unlearning in response to private-attribute requests after the model has been trained, minimizing the degradation of recommendation accuracy. Aegis employs an information-theoretic multi-component loss function to balance privacy protection and recommendation performance. Additionally, Aegis adapts to scenarios where the training interaction data may be unavailable, reflecting real-world centralized protection settings. Comprehensive evaluations on various benchmark datasets demonstrate that our method effectively safeguards user privacy while maintaining high-quality recommendations.
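To make the idea concrete, the sketch below shows one plausible form of post-training attribute unlearning on frozen user embeddings: a surrogate attribute classifier stands in for the inference attacker, a privacy term pushes its predictions toward uniform, and a utility term keeps the unlearned embeddings close to the originals. The two-term objective, the classifier architecture, and all names (e.g., unlearn_attributes, lam) are illustrative assumptions; they are not the paper's actual Aegis loss, whose information-theoretic formulation is not detailed in this abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of post-training attribute unlearning on trained user embeddings.
# The surrogate attacker, the entropy-style privacy term, and the MSE utility term are
# assumptions for illustration, not the exact Aegis objective.

class AttributeClassifier(nn.Module):
    """Surrogate attacker that tries to infer a sensitive attribute from embeddings."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, x):
        return self.net(x)

def unlearn_attributes(user_emb, attr_labels, num_classes, steps=200, lr=1e-2, lam=1.0):
    """Perturb trained user embeddings so the attribute becomes hard to infer,
    while staying close to the originals (a proxy for recommendation utility)."""
    original = user_emb.detach()
    emb = original.clone().requires_grad_(True)
    attacker = AttributeClassifier(original.size(1), num_classes)
    opt_emb = torch.optim.Adam([emb], lr=lr)
    opt_att = torch.optim.Adam(attacker.parameters(), lr=lr)

    for _ in range(steps):
        # (1) Train the surrogate attacker on the current embeddings.
        opt_att.zero_grad()
        att_loss = F.cross_entropy(attacker(emb.detach()), attr_labels)
        att_loss.backward()
        opt_att.step()

        # (2) Update embeddings: drive the attacker's predictions toward uniform
        #     (privacy term) while penalizing drift from the original embeddings
        #     (utility-preservation term).
        opt_emb.zero_grad()
        logits = attacker(emb)
        uniform = torch.full_like(logits, 1.0 / num_classes)
        privacy_loss = F.kl_div(F.log_softmax(logits, dim=1), uniform, reduction="batchmean")
        utility_loss = F.mse_loss(emb, original)
        (privacy_loss + lam * utility_loss).backward()
        opt_emb.step()

    return emb.detach()

# Example usage with random tensors (shapes only; real embeddings would come from FedRec training).
emb = torch.randn(512, 32)
labels = torch.randint(0, 2, (512,))
unlearned = unlearn_attributes(emb, labels, num_classes=2)
```

Because only the stored user embeddings are modified, this style of unlearning can run after federated training has finished and does not require access to the original interaction data, which is consistent with the post-training setting described above.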
Submission Number: 251