Abstract: The emergence of Federated Learning (FL) has enabled privacy-preserving distributed machine learning, yet its vulnerability to poisoning attacks remains a critical challenge. Existing defenses often rely on static aggregation rules or centralized verification mechanisms, which lack adaptability to dynamic adversarial behavior and incur high computational cost. To address these limitations, this paper proposes the Federated Weighted Learning Algorithm (FWLA), a novel framework that mitigates poisoning attacks through client-specific weight adaptation and asynchronous collaboration. The core of FWLA lies in two components: (1) a residual testing mechanism that dynamically identifies malicious clients by analyzing deviations between local and global model updates, and (2) an asynchronous training protocol that allows clients to upload parameters independently, avoiding synchronization bottlenecks. Extensive experiments on three benchmark datasets (CICIDS2017, UNSW-NB15, NSL-KDD) demonstrate FWLA's superiority over state-of-the-art methods: on CICIDS2017, FWLA achieves 98.9% accuracy and reduces the false acceptance rate to 2.9%. A robustness analysis further shows that FWLA maintains 83% accuracy even when 20% of clients are malicious, outperforming FedAvg and FedSGD by 12%. These improvements stem from FWLA's ability to suppress poisoned updates through iterative weight adjustment, validated by ablation studies showing a 3.3% accuracy drop when residual testing is removed. Nonetheless, FWLA's resource consumption grows with the number of clients, so future work will focus on strategies to reduce this overhead.
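To make the weighting mechanism concrete, the following is a minimal Python sketch of residual testing followed by iterative weight adjustment. It is not the authors' implementation: the L2 residual statistic, the exponential down-weighting rule, and all names (`residual_scores`, `adapt_weights`, `lr`) are illustrative assumptions consistent with the abstract's description.

```python
import numpy as np

def residual_scores(client_updates, global_update):
    """Residual test: L2 deviation of each client's update from the
    aggregated global update. Large residuals suggest poisoning.
    (Illustrative statistic; the paper's exact test may differ.)"""
    return np.array([np.linalg.norm(u - global_update) for u in client_updates])

def adapt_weights(weights, scores, lr=0.5, eps=1e-8):
    """Iterative weight adjustment: shrink the influence of clients
    whose updates deviate most, then renormalize."""
    penalty = scores / (scores.max() + eps)   # scale residuals to [0, 1]
    new_w = weights * np.exp(-lr * penalty)   # exponential down-weighting (assumed rule)
    return new_w / new_w.sum()

def weighted_aggregate(client_updates, weights):
    """Weighted average of client updates (FedAvg-style, but with
    adaptive residual-driven weights instead of fixed data shares)."""
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy round: 4 honest clients plus 1 poisoned client pushing a large update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, 10) for _ in range(4)] + [rng.normal(5, 0.1, 10)]
w = np.full(5, 1 / 5)
for _ in range(3):                            # a few weighting iterations
    g = weighted_aggregate(updates, w)
    w = adapt_weights(w, residual_scores(updates, g))
print(np.round(w, 3))                         # poisoned client's weight collapses
```

In this toy round, the fifth (poisoned) client's weight shrinks toward zero within a few iterations while the honest clients' weights grow to compensate, mirroring the suppression behavior the abstract attributes to residual testing.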
External IDs: dblp:journals/ijcisys/NingZLXL25