EA-PS: Estimated Attack Effectiveness based Poisoning Defense in Federated Learning under Parameter Constraint Strategy

ICLR 2026 Conference Submission 15944 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Backdoor Poisoning Defense, Robust Federated Learning
TL;DR: When facing persistent attacks, EA-PS constrains the perturbation range of local parameters while minimizing the long-lasting impact of attacks.
Abstract: Federated learning is vulnerable to poisoning attacks due to the characteristics of its learning paradigm. A number of server-side and client-side poisoning defense methods exist to mitigate the impact of such attacks. However, when facing persistent attacks with long-lasting effects, existing defenses fail to guarantee robust and stable performance. In this paper, we propose a client-side defense method, EA-PS, which can be effectively combined with server-side methods to address this issue. The key idea of EA-PS is to constrain the perturbation range of local parameters while minimizing the impact of attacks. To theoretically guarantee the performance and robustness of EA-PS, we prove that our method has an efficiency guarantee with a lower upper bound, a robustness guarantee with a smaller certified radius, and a larger convergence upper bound. Experimental results show that, compared with other client-side defense methods combined with different server-side defense methods under both IID and non-IID data distributions, EA-PS better mitigates performance degradation, achieves lower attack success rates, and delivers more stable defense performance with smaller variance. Our code can be found at https://anonymous.4open.science/r/EA-SP-6BC9.
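The abstract's key idea, constraining the perturbation range of local parameters around the global model, can be illustrated with a minimal sketch. This is a hypothetical L2-ball projection written for illustration only; the function name `constrain_local_update`, the NumPy representation of parameters, and the choice of an L2 radius are all assumptions, not the authors' actual EA-PS rule.

```python
import numpy as np

def constrain_local_update(global_params, local_params, radius=0.1):
    """Project the local perturbation back into an L2 ball of the given
    radius around the global parameters (illustrative sketch, not the
    exact EA-PS constraint)."""
    delta = local_params - global_params
    norm = np.linalg.norm(delta)
    if norm > radius:
        # Rescale the perturbation so its L2 norm equals the radius.
        delta = delta * (radius / norm)
    return global_params + delta

# Example: a local update that drifted far from the global model is
# pulled back to the boundary of the allowed perturbation range.
g = np.zeros(4)
w_local = np.array([3.0, 0.0, 4.0, 0.0])  # L2 distance 5 from g
w_constrained = constrain_local_update(g, w_local, radius=1.0)
```

A client-side clip of this kind bounds how far any single (possibly poisoned) update can move the model, which is the intuition behind the smaller certified radius the paper claims.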
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15944