Defense against local model poisoning attacks to Byzantine-robust federated learning

Published: 01 Jan 2022, Last Modified: 09 Apr 2025, Frontiers Comput. Sci. 2022, CC BY-SA 4.0
Abstract: The letter presents an effective defense paradigm against local model poisoning attacks in federated learning that requires no auxiliary dataset, further enhancing the robustness of Byzantine-robust aggregation rules to such attacks. Experimental results show that our defense scheme achieves better detection performance and lower detection time under local model poisoning attacks. For more technical details, please refer to the supplementary material.
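
The letter's actual detection mechanism is given only in the supplementary material; as a loose, hedged illustration of the general setting it describes (screening suspicious local updates before a Byzantine-robust aggregation rule, without any auxiliary dataset), the sketch below scores client updates by cosine-similarity outlierness and then aggregates the retained updates with a coordinate-wise median. The scoring rule, the `keep_frac` parameter, and all function names are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only -- NOT the defense proposed in the letter.
# It shows one generic way to filter anomalous local model updates
# before a Byzantine-robust aggregation rule, using no auxiliary data.
import numpy as np

def outlier_scores(updates: np.ndarray) -> np.ndarray:
    """Score each client update by its mean cosine distance to all others."""
    norms = np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12
    unit = updates / norms
    sims = unit @ unit.T                   # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    mean_sim = sims.sum(axis=1) / (len(updates) - 1)
    return 1.0 - mean_sim                  # higher score = more anomalous

def filter_and_aggregate(updates: np.ndarray, keep_frac: float = 0.8) -> np.ndarray:
    """Drop the most anomalous updates, then take the coordinate-wise median."""
    scores = outlier_scores(updates)
    n_keep = max(1, int(np.ceil(keep_frac * len(updates))))
    kept = np.argsort(scores)[:n_keep]     # keep the lowest-scoring (most typical) clients
    return np.median(updates[kept], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(8, 5))    # benign local updates
    poisoned = rng.normal(5.0, 0.1, size=(2, 5))  # crafted malicious updates
    all_updates = np.vstack([honest, poisoned])
    print(filter_and_aggregate(all_updates))
```

Any detection-before-aggregation scheme of this shape can be composed with standard Byzantine-robust rules (e.g., Krum, trimmed mean, median); the letter's contribution is a specific paradigm of this kind, detailed in the supplementary material.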