Abstract: Federated Learning (FL) enables multiple clients to train a model collaboratively without sharing their local data. Yet FL systems are vulnerable to well-designed Byzantine attacks, which aim to disrupt model training by uploading malicious model updates. Existing defenses based on robust aggregation rules overlook the diversity of magnitude and direction across different layers of the model updates, resulting in limited robustness, particularly in non-IID settings. To address these challenges, we propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness. Specifically, LASA includes a pre-aggregation sparsification module that sparsifies updates from each client before aggregation, reducing the impact of malicious parameters and minimizing the interference from less important parameters in the subsequent filtering process. Based on the sparsified updates, a layer-wise adaptive filter then selects benign layers for aggregation, using both magnitude and direction metrics across all clients. We provide a detailed theoretical robustness analysis of LASA and a resilience analysis of FL integrated with LASA. Extensive experiments are conducted on various IID and non-IID datasets, and the numerical results demonstrate the effectiveness of LASA. Code is available at https://github.com/JiiahaoXU/LASA.
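To make the two-stage pipeline described above concrete, here is a minimal sketch: each client update is sparsified before aggregation, then each layer is filtered across clients on magnitude and direction before averaging. This is an illustrative sketch only, not the authors' implementation (see the linked repository for that); the function names, the top-k keep ratio, and the median/MAD-style thresholds are assumptions made for illustration.

```python
# Minimal sketch of pre-aggregation sparsification followed by layer-wise
# adaptive filtering. Assumes each client update is a dict {layer: ndarray}.
# All names and thresholds here are hypothetical, not LASA's actual interface.
import numpy as np

def sparsify_topk(update, keep_ratio=0.1):
    """Zero out all but the largest-magnitude entries of each layer."""
    sparse = {}
    for name, w in update.items():
        flat = np.abs(w).ravel()
        k = max(1, int(keep_ratio * flat.size))
        thresh = np.partition(flat, -k)[-k]  # k-th largest |entry|
        sparse[name] = np.where(np.abs(w) >= thresh, w, 0.0)
    return sparse

def layer_adaptive_aggregate(updates):
    """Per layer, keep clients whose norm and direction are close to the
    cross-client medians, then average the surviving layers."""
    aggregated = {}
    for name in updates[0]:
        mats = [u[name].ravel() for u in updates]
        norms = np.array([np.linalg.norm(m) for m in mats])
        ref = np.median(np.stack(mats), axis=0)  # robust reference direction
        cos = np.array([m @ ref / (np.linalg.norm(m) * np.linalg.norm(ref) + 1e-12)
                        for m in mats])
        # Magnitude filter: within one median-absolute-deviation of the median norm.
        mad = np.median(np.abs(norms - np.median(norms)))
        norm_ok = np.abs(norms - np.median(norms)) <= mad + 1e-12
        # Direction filter: cosine similarity at or above the median similarity.
        dir_ok = cos >= np.median(cos)
        keep = [i for i in range(len(mats)) if norm_ok[i] and dir_ok[i]]
        keep = keep or list(range(len(mats)))  # fall back to all clients
        aggregated[name] = np.mean([updates[i][name] for i in keep], axis=0)
    return aggregated

# Usage: sparsify each client's update, then aggregate layer by layer.
clients = [{"fc": np.random.randn(4, 4)} for _ in range(5)]
global_update = layer_adaptive_aggregate([sparsify_topk(u) for u in clients])
```

Filtering per layer, rather than on whole flattened updates, is the key design point the abstract emphasizes: a poisoned update may look benign in aggregate while individual layers deviate sharply in norm or direction.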