RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks

Published: 21 Sept 2023, Last Modified: 05 Jan 2024 · NeurIPS 2023 poster
Keywords: Federated Learning, Model Poisoning Attacks, Proactive Detection, Robust Aggregation, Benign Outlier Identification
TL;DR: A new FL defense that outperforms existing methods against the latest model poisoning attacks and solves the intractable problem of benign outlier identification.
Abstract: Model poisoning attacks greatly jeopardize the application of federated learning (FL). The latest model poisoning attacks can evade existing defenses, degrading prediction accuracy. Moreover, these defenses cannot distinguish benign outliers from malicious gradients, which further compromises model generalization. In this work, we propose a novel defense, named RECESS, comprising both detection and aggregation, to serve as a “vaccine” for FL against model poisoning attacks. Unlike the passive analysis of previous defenses, RECESS proactively queries each participating client with a delicately constructed aggregation gradient and detects malicious clients from their responses with higher accuracy. Furthermore, RECESS adopts a newly proposed trust-scoring-based mechanism to robustly aggregate gradients. Rather than scoring clients in each iteration as previous methods do, RECESS accounts for the correlation of each client’s performance over multiple iterations when estimating the trust score, significantly increasing detection fault tolerance. Finally, we extensively evaluate RECESS on typical model architectures and four datasets under various settings, including white/black-box attacks and cross-silo/device FL. Experimental results show that RECESS reduces the accuracy loss caused by the latest model poisoning attacks more effectively than five classic and two state-of-the-art defenses.
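To make the abstract’s two-stage idea concrete, below is a minimal, hypothetical Python sketch of trust-scored aggregation: per-client trust is updated across iterations from a deviation metric (standing in for how anomalously a client responds to the server’s probing gradient), and gradients are averaged with trust-proportional weights rather than hard-dropped. The deviation metric, decay factor, and weighting rule are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

def update_trust_scores(trust, deviations, decay=0.9):
    """Hypothetical multi-iteration trust update: an exponential moving
    average that rewards clients whose responses to the server's probing
    gradient deviate little from expectation. The decay factor and the
    (1 - deviation) scoring rule are assumptions for illustration."""
    return decay * trust + (1.0 - decay) * (1.0 - deviations)

def aggregate(gradients, trust):
    """Trust-weighted aggregation: persistently anomalous clients get
    low weight, so one noisy round does not exclude a benign outlier."""
    w = np.clip(trust, 0.0, None)
    w = w / w.sum()
    return np.average(gradients, axis=0, weights=w)

# Toy usage: 5 clients, 10-dimensional gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(5, 10))
# Stand-in deviation metric per client (lower = more consistent response).
probe_deviation = np.abs(rng.normal(scale=0.1, size=5))
trust = np.ones(5)
trust = update_trust_scores(trust, probe_deviation)
aggregated = aggregate(grads, trust)
```

Averaging over multiple iterations is what gives the fault tolerance described in the abstract: a benign client with an outlying but honest update recovers its trust score over time, whereas a consistently malicious client does not.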
Supplementary Material: pdf
Submission Number: 7493