Abstract: Federated learning (FL) is susceptible to poisoning attacks in which malicious clients embed backdoors into the global model. Even a single attacker can mount a potent attack, which challenges existing defenses. This work proposes GuardFL, a novel defense against backdoor attacks in FL. GuardFL integrates majority consensus and client feedback mechanisms to detect and isolate malicious clients. Extensive experiments demonstrate the effectiveness of GuardFL in identifying and isolating attackers. We evaluate GuardFL on three benchmark image classification datasets: CIFAR-10, MNIST, and Fashion-MNIST (FMNIST). We believe our techniques yield a robust defense against backdoor poisoning in FL.
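To make the high-level idea concrete, the sketch below shows one way a majority-consensus filter over client updates could look; it is not the paper's implementation, and the function name `consensus_aggregate`, the cosine-similarity criterion, and the threshold value are illustrative assumptions only.

```python
# Illustrative sketch (not GuardFL's actual algorithm): a simple
# majority-consensus filter over client model updates. All names and
# thresholds here are hypothetical assumptions for illustration.
import numpy as np

def consensus_aggregate(client_updates, sim_threshold=0.0):
    """Average only updates whose mean cosine similarity to the other
    clients' updates exceeds `sim_threshold`; clients that disagree with
    the majority are flagged and excluded from this aggregation round."""
    updates = np.stack(client_updates)                # (n_clients, n_params)
    norms = np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12
    unit = updates / norms                            # unit-normalized updates
    sims = unit @ unit.T                              # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)                       # ignore self-similarity
    mean_sim = sims.sum(axis=1) / (len(updates) - 1)  # agreement with the rest
    accepted = mean_sim > sim_threshold               # majority-consensus check
    return updates[accepted].mean(axis=0), np.where(~accepted)[0]

# Example: nine benign clients with similar updates and one attacker
# pushing an opposing update; the attacker is flagged and excluded.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(9, 4))
attacker = -benign.mean(axis=0, keepdims=True)
global_update, flagged = consensus_aggregate(list(benign) + list(attacker))
print("flagged client indices:", flagged)
```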