Abstract: Federated learning (FL) has emerged as a powerful approach to decentralizing the training of machine learning models, enabling collaborative models to be trained while preserving the privacy of the datasets provided by the different parties. Despite these benefits, FL is vulnerable to adversaries, much like other machine learning (ML) algorithms in centralized settings. For example, a single malicious or faulty participant in an FL task can entirely compromise the performance of the model when insecure implementations are used. In this chapter, we provide a comprehensive analysis of the vulnerabilities of FL algorithms to different attacks that can compromise their performance. We describe a taxonomy of attacks, comparing their similarities and differences with respect to attacks on centralized ML algorithms. We then describe and analyze different families of existing defenses that can be applied to mitigate these threats. Finally, we review a comprehensive set of attacks that aim to compromise the performance and convergence of FL.
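The claim that a single participant can compromise an insecure aggregation can be illustrated with a minimal sketch. The snippet below, using NumPy, assumes plain unweighted federated averaging (the function and variable names such as `fedavg`, `benign`, and `malicious` are illustrative, not from the chapter): because the server simply averages client updates, one attacker can scale its contribution so the aggregate lands on an arbitrary target.

```python
import numpy as np

def fedavg(updates):
    """Aggregate client model updates by simple (unweighted) averaging."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
n_clients = 10
dim = 5

# Benign clients send small, similar updates (e.g., local gradient steps).
benign = [rng.normal(0.0, 0.01, dim) for _ in range(n_clients - 1)]

# A single attacker boosts an arbitrary target direction by the number of
# clients (and cancels the benign mass), so the unprotected average is
# pulled almost entirely toward the attacker's chosen point.
target = np.ones(dim)
malicious = n_clients * target - np.sum(benign, axis=0)

honest_avg = fedavg(benign)                  # close to zero
poisoned_avg = fedavg(benign + [malicious])  # equals `target`

print("honest aggregate:  ", np.round(honest_avg, 3))
print("poisoned aggregate:", np.round(poisoned_avg, 3))
```

This is only a toy illustration of why unsecured averaging is fragile; the defenses discussed in the chapter (e.g., robust aggregation rules) are designed precisely to limit the influence of such outlying updates.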