FLAD: Byzantine-Robust Federated Learning Based on Gradient Feature Anomaly Detection

Peng Tang, Xiaoyu Zhu, Weidong Qiu, Zheng Huang, Zhenyu Mu, Shujun Li

Published: 01 Jan 2025. Last Modified: 25 Jan 2026. IEEE Transactions on Dependable and Secure Computing. License: CC BY-SA 4.0.
Abstract: Federated Learning (FL) has gained significant attention due to its ability to jointly train global models by exchanging local gradients instead of raw local datasets. However, poisoning attacks have emerged as a severe threat to FL security, where malicious clients submit crafted gradients to compromise the integrity and availability of the model. Although researchers have worked on countering these attacks to achieve Byzantine-robust FL, it remains challenging to achieve high accuracy, robustness, and efficiency simultaneously. We propose FLAD, a novel Byzantine-robust FL approach based on gradient feature anomaly detection, which is the first approach to use neural networks to adaptively learn gradient features and measure feature similarity to counteract various types of poisoning attacks. Specifically, FLAD employs a small clean dataset to bootstrap trust and trains Feature Extraction Models (FEMs). Using the FEMs together with DBSCAN clustering, abnormal gradients from malicious clients are detected and eliminated. Extensive experiments on both Non-IID and IID datasets demonstrate that FLAD achieves superior accuracy, robustness, efficiency, and generalizability compared to state-of-the-art approaches. Additionally, we implement a privacy-preserving variant, PFLAD, using the CKKS homomorphic encryption scheme and random permutation to protect the privacy of transmitted gradients.
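The detect-and-eliminate step described in the abstract can be illustrated with a toy sketch. The code below is not the authors' implementation: it substitutes simple unit-normalized gradients for the learned FEM features, uses a minimal hand-rolled DBSCAN, and invents all names and parameter values (`eps`, `min_samples`, client counts). It only shows the general pattern of clustering per-client gradient features and dropping points that fall outside the dense benign cluster before aggregation.

```python
import numpy as np

def dbscan(points, eps, min_samples):
    """Minimal DBSCAN: returns one integer label per point (-1 = noise/outlier)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_samples:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # grow the cluster from core point i
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_samples:
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

# Hypothetical round: 8 honest clients near the true gradient, 2 malicious
# clients submitting scaled, sign-flipped gradients.
rng = np.random.default_rng(0)
dim, honest_n, bad_n = 20, 8, 2
true_grad = rng.normal(size=dim)
honest = true_grad + 0.05 * rng.normal(size=(honest_n, dim))
malicious = -5.0 * true_grad + 0.05 * rng.normal(size=(bad_n, dim))
grads = np.vstack([honest, malicious])

# Crude stand-in for the learned FEM features: unit-normalized gradients.
feats = grads / np.linalg.norm(grads, axis=1, keepdims=True)

labels = dbscan(feats, eps=0.5, min_samples=4)
benign = labels == 0            # the single dense cluster of honest clients
agg = grads[benign].mean(axis=0)  # aggregate only the surviving gradients
```

In this setup the two flipped gradients have too few neighbors to form a cluster, so DBSCAN marks them as noise and the aggregate stays aligned with the true gradient; the real system replaces the normalization step with features produced by the trained FEMs.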