EA-PS: Estimated Attack Effectiveness Based Poisoning Defense in Federated Learning under a Parameter Constraint Strategy
TL;DR: EA-PS minimizes long-lasting backdoor attack effects via a parameter constraint strategy, yielding more stable defense performance.
Abstract: Federated learning is vulnerable to poisoning attacks due to the decentralized nature of its learning paradigm. A number of server-side and client-side backdoor defense methods exist to mitigate the impact of such attacks. However, when facing persistent adaptive attacks with long-lasting effects, existing defenses fail to guarantee robust and stable performance. In this paper, we propose a client-side defense method, EA-PS, which can be effectively combined with server-side methods to address these issues. The key idea of EA-PS is to constrain the perturbation range of local parameters while minimizing the impact of attacks. To theoretically guarantee the performance and robustness of EA-PS, we prove that our method provides an efficiency guarantee with a lower upper bound, a robustness guarantee with a smaller certified radius, and a larger convergence upper bound. Experimental results show that, compared with other client-side defense methods combined with various server-side defenses under both IID and non-IID data distributions, EA-PS achieves lower attack success rates and more stable defense performance with smaller variance. Our code can be found at https://anonymous.4open.science/r/EA-SP-6BC9.
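The key idea above, constraining how far local parameters may drift from the global model, is commonly realized as a norm-ball projection of the client update. The sketch below is a minimal illustration under that assumption; the function `constrain_update`, the `radius` parameter, and the L2 projection are hypothetical choices for exposition, not the paper's actual EA-PS rule or its attack-effectiveness estimate.

```python
import numpy as np

def constrain_update(global_params: np.ndarray,
                     local_params: np.ndarray,
                     radius: float) -> np.ndarray:
    """Project a client's local update into an L2 ball of the given
    radius around the global parameters. Hypothetical illustration of
    a parameter constraint strategy, not the EA-PS algorithm itself."""
    delta = local_params - global_params
    norm = np.linalg.norm(delta)
    if norm > radius:
        # Scale the update back onto the ball's surface so the
        # perturbation range of local parameters stays bounded.
        delta *= radius / norm
    return global_params + delta

# Toy usage: an outsized (potentially poisoned) update gets clipped.
g = np.zeros(4)
w = np.array([0.3, -0.1, 2.5, 0.0])
print(constrain_update(g, w, radius=1.0))
```

Under this reading, bounding the update norm limits how much influence any single client, malicious or benign, can exert on the aggregated model in one round.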
Primary Area: Social Aspects->Security
Keywords: Backdoor Poisoning Defense, Robust Federated Learning
Submission Number: 3858