Abstract: Federated learning (FL), a distributed machine learning (ML) framework, is susceptible to Byzantine attacks, since an attacker can manipulate clients' local data or models to compromise the performance of the global model. A wealth of defenses has been developed to mitigate such attacks by limiting the impact of malicious models. Nevertheless, owing to the high dimensionality of models and the variety of Byzantine attacks, attackers can easily circumvent approaches that rely solely on a single server-side defense. Therefore, we propose Basalt, a Byzantine-robust FL framework with a server-client joint defense mechanism that enables multiple clients to train a global ML model under Byzantine attacks. On the client side, we design an efficient self-defense approach with a model-level penalty loss that restricts local-benign divergence and decreases local-malicious correlation to prevent misclassification. On the server side, we present an efficient defense strategy based on the manifold and maximum clique, further strengthening FL's resilience against Byzantine attacks. We provide theoretical guarantees for global model convergence in FL under Byzantine attacks. Our extensive experiments demonstrate that Basalt outperforms existing state-of-the-art works. In particular, it achieves nearly 100% accuracy in detecting malicious clients on non-independent and identically distributed (non-IID) MNIST datasets under various Byzantine attacks.
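To make the two defense ideas named in the abstract concrete, the following is a minimal sketch of (i) a client-side penalty loss combining a proximal divergence term with a correlation term, and (ii) server-side aggregation that keeps only the maximum clique of mutually similar client updates. The abstract does not specify Basalt's exact formulation, so all names and parameters here (lambda_div, lambda_corr, sim_threshold) are hypothetical, and cosine similarity stands in for the paper's manifold-based measure.

```python
# Hypothetical sketch only; the exact Basalt loss and aggregation rule
# are not given in the abstract.
import torch
import torch.nn.functional as F
import networkx as nx


def client_penalty_loss(task_loss, local_params, global_params,
                        suspected_malicious_dir,
                        lambda_div=0.1, lambda_corr=0.1):
    """Task loss plus (i) a proximal term restricting divergence from the
    global model and (ii) a term penalizing correlation of the local update
    with a suspected-malicious update direction."""
    local_vec = torch.cat([p.flatten() for p in local_params])
    global_vec = torch.cat([p.flatten() for p in global_params])
    divergence = torch.norm(local_vec - global_vec) ** 2
    correlation = F.cosine_similarity(local_vec - global_vec,
                                      suspected_malicious_dir, dim=0)
    return task_loss + lambda_div * divergence + lambda_corr * correlation


def server_max_clique_aggregate(client_updates, sim_threshold=0.5):
    """Connect pairs of client updates whose similarity exceeds a threshold,
    find the maximum clique, and average only the updates in that clique."""
    vecs = [torch.cat([p.flatten() for p in u]) for u in client_updates]
    n = len(vecs)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if F.cosine_similarity(vecs[i], vecs[j], dim=0) >= sim_threshold:
                g.add_edge(i, j)
    clique = max(nx.find_cliques(g), key=len)  # largest maximal clique
    aggregated = torch.stack([vecs[i] for i in clique]).mean(dim=0)
    return aggregated, sorted(clique)  # kept update and accepted client IDs
```

The clique-based rule reflects the intuition that benign updates are mutually consistent while malicious ones are not, so the largest set of pairwise-similar updates is likely benign.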