HidAttack: An Effective and Undetectable Model Poisoning Attack to Federated Recommenders

Published: 01 Jan 2025, Last Modified: 21 May 2025. IEEE Trans. Knowl. Data Eng. 2025. License: CC BY-SA 4.0
Abstract: Privacy concerns in recommender systems must be addressed due to constitutional and commercial requirements. At the same time, centralized recommendation models are susceptible to poisoning attacks, which threaten their integrity. In this context, federated learning has emerged as a promising solution to these privacy concerns. However, recent investigations have shown that Federated Recommender Systems (FedRS) are also vulnerable to model poisoning attacks. The attack strategies highlighted in the existing literature require a large fraction of Byzantine clients to effectively influence training, which is unrealistic for practical systems with millions of users. Moreover, most attack models neglect the defense mechanism running at the aggregation server. To this end, we propose HidAttack, a novel undetectable hidden model poisoning attack strategy for FedRS that aims to raise the exposure ratio of targeted items with a minimal number of Byzantine clients. To achieve this goal, we construct a cluster of baseline attacks, on top of which a bandit model is designed to intelligently infer effective poisoned gradients. This ensures a diverse pattern of poisoned gradients, so Byzantine clients cannot be distinguished from benign clients by the defense mechanism. Extensive experiments demonstrate that: 1) our attack significantly and covertly increases the target item's exposure rate without compromising recommendation accuracy, and 2) current defenses are insufficient, emphasizing the need for stronger security measures against our model poisoning attack on FedRS.
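The abstract does not spell out the bandit formulation. As a rough illustration only, the sketch below assumes an epsilon-greedy multi-armed bandit whose arms are the baseline attacks in the cluster and whose reward is an estimate of the target item's exposure gain; all class names, interfaces, and the reward definition are hypothetical and not the paper's actual algorithm.

```python
import numpy as np

class AttackBandit:
    """Hypothetical epsilon-greedy bandit over a cluster of baseline attacks."""

    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms = n_arms              # one arm per baseline attack
        self.epsilon = epsilon            # exploration probability
        self.counts = np.zeros(n_arms)    # times each arm was played
        self.values = np.zeros(n_arms)    # running mean reward per arm

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the best arm.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_arms)
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        # Incremental mean update of the chosen arm's value estimate.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


def byzantine_round(bandit, baseline_attacks, benign_gradient, exposure_gain):
    """One federated round from a Byzantine client's perspective (sketch).

    baseline_attacks: list of callables mapping a benign gradient to a
                      poisoned gradient (assumed interface).
    exposure_gain:    callable estimating the target item's exposure increase,
                      used here as the bandit reward (assumed definition).
    """
    arm = bandit.select_arm()
    poisoned = baseline_attacks[arm](benign_gradient)
    reward = exposure_gain(poisoned)
    bandit.update(arm, reward)
    return poisoned
```

Because different baseline attacks are chosen across rounds and clients, the poisoned updates submitted to the server vary in pattern, which is consistent with the abstract's claim that Byzantine clients remain hard to distinguish from benign ones.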