Defending Against Backdoor Attacks in Federated Learning Using Differential Privacy Combined with OOD Data Attributes
Abstract: While federated learning offers significant privacy benefits, it is also vulnerable to backdoor attacks. Existing differential privacy-based defenses are effective against backdoor attacks, but they also significantly degrade the benign performance of the aggregated model. To address this shortcoming, we employ a backdoor detection mechanism that exploits the fact that backdoor samples are out-of-distribution (OOD) relative to benign samples: it excludes malicious backdoor updates from aggregation, and any remaining backdoors are removed by adding differential privacy noise. Experimental results on the CIFAR10 and FEMNIST datasets show that our proposed method effectively removes backdoors while having a negligible impact on the benign performance of the model.
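The abstract describes a two-stage server-side defense: first filter out client updates that look out-of-distribution, then add differential-privacy noise to the aggregate to mask any residual backdoor signal. The sketch below is illustrative only and is not the paper's actual mechanism: it uses a coordinate-wise-median reference with a cosine-similarity threshold as a stand-in for the OOD detector, and a simple Gaussian perturbation (names `robust_aggregate`, `tau`, `sigma` are invented here) for the DP step.

```python
import math
import random
import statistics

def cosine(u, v):
    # Cosine similarity between two update vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def robust_aggregate(updates, tau=0.5, sigma=0.01, seed=0):
    """Drop OOD-looking client updates, average the rest, add Gaussian noise.

    updates: list of client update vectors (lists of floats).
    tau:     cosine-similarity threshold for keeping an update (assumed).
    sigma:   std-dev of the DP-style Gaussian noise (assumed).
    """
    # Coordinate-wise median as a rough "benign" reference direction.
    median = [statistics.median(col) for col in zip(*updates)]
    # Keep updates that point roughly the same way as the reference.
    kept = [u for u in updates if cosine(u, median) >= tau]
    if not kept:
        kept = updates  # fall back rather than aggregate nothing
    avg = [sum(col) / len(kept) for col in zip(*kept)]
    rng = random.Random(seed)
    # Gaussian perturbation to mask residual backdoor contributions.
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

For example, three benign updates near `[1, 0]` plus one malicious update at `[-1, 0]` would see the malicious one filtered out before averaging, so the noise budget only has to cover whatever slips past the filter.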
DOI: 10.1007/978-981-96-5693-6_20