Abstract: Federated learning (FL) is an increasingly popular privacy-preserving machine learning paradigm that enables clients to collaboratively train a global model without sharing their raw data. Despite its advantages, FL is vulnerable to untargeted Byzantine poisoning attacks, in which malicious clients send incorrect model updates during training to degrade the global model's performance or prevent it from converging. Existing anomaly-detection-based defenses typically rely on additional auxiliary datasets and assume a known, fixed proportion of malicious clients. To overcome these shortcomings, we propose FedMP, a multi-pronged defense algorithm against untargeted Byzantine poisoning attacks. The core idea of FedMP is to detect anomalous variations in the magnitude and direction of model updates across communication rounds. Specifically, FedMP first applies an adaptive scaling module to limit the impact of malicious updates with anomalous amplitudes. It then identifies and filters malicious model updates with abnormal directions through dynamic clustering and partial filtering. Finally, FedMP extracts the pure components of the filtered updates as reputation scores for model aggregation, further reducing the influence of malicious updates. Comprehensive evaluations on three publicly available datasets demonstrate that FedMP significantly outperforms existing Byzantine-robust defenses in scenarios with a high proportion of malicious clients (0.7 in our experiments) and a high degree of non-IID data (0.1 in our experiments).
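To make the three-stage pipeline concrete, the sketch below shows one plausible server-side aggregation round in this style. It is an illustration under stated assumptions, not the paper's implementation: the median-norm clipping rule, the 2-means clustering on unit-normalized directions, and the cosine-similarity reputation weights are all stand-ins for FedMP's adaptive scaling, dynamic clustering with partial filtering, and reputation-based aggregation, and the function names are hypothetical.

```python
# Illustrative sketch of a FedMP-style robust aggregation round.
# ASSUMPTIONS: median-norm clipping, 2-means direction clustering, and
# cosine-based reputation weights are illustrative choices, not the
# paper's exact modules.
import numpy as np
from sklearn.cluster import KMeans

def adaptive_scale(updates):
    """Clip each update's L2 norm to the median client norm (assumed
    rule) to bound updates with anomalous amplitudes."""
    norms = np.linalg.norm(updates, axis=1)
    bound = np.median(norms)
    scale = np.minimum(1.0, bound / (norms + 1e-12))
    return updates * scale[:, None]

def filter_by_direction(updates):
    """Cluster unit-normalized updates (2-means as a stand-in for
    dynamic clustering) and keep only the majority cluster."""
    dirs = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dirs).labels_
    keep = labels == np.argmax(np.bincount(labels))
    return updates[keep]

def reputation_aggregate(updates):
    """Weight each surviving update by its nonnegative cosine
    similarity to the filtered mean, then take a weighted average."""
    mean = updates.mean(axis=0)
    cos = updates @ mean / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(mean) + 1e-12)
    w = np.clip(cos, 0.0, None)
    w = w / (w.sum() + 1e-12)
    return (w[:, None] * updates).sum(axis=0)

# Toy round: 7 benign updates near a shared direction, 3 poisoned ones.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(7, 5))
poisoned = rng.normal(-5.0, 0.1, size=(3, 5))
updates = np.vstack([benign, poisoned])
agg = reputation_aggregate(filter_by_direction(adaptive_scale(updates)))
print(agg)  # should land near the benign mean despite 30% attackers
```

In this toy round, scaling first bounds the large-norm poisoned updates, direction clustering then discards them as the minority cluster, and the reputation weights down-weight any residual outliers among the survivors, mirroring the magnitude-then-direction-then-reputation ordering described above.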