Defending Federated Learning from Backdoor Attacks: Anomaly-Aware FedAVG with Layer-Based Aggregation
Abstract: Federated Learning (FL) is susceptible to backdoor adversarial attacks during training, which pose a significant threat to model performance. Existing mitigation solutions rely mainly on neural network (NN) model statistics and discard an entire client model if it is deemed attacked. This approach is inefficient, as it results in suboptimal performance. It is therefore crucial to develop lightweight backdoor attack mitigation solutions that make efficient use of clients' model statistics. To address this issue, we propose LBAA-FedAVG (Layer-Based Anomaly-Aware FedAVG), a modified version of the common Federated Averaging (FedAVG) aggregation mechanism. Our framework employs a clustering-based technique that treats each NN layer individually: depending on the type of adversarial attack, it selectively eliminates one or multiple layers of a client's NN during aggregation. We further focused on the model inversion attack and varied the percentage of compromised clients from 10% to 50%. Our experimental findings demonstrate that LBAA-FedAVG outperforms FedAVG in reducing the negative effects of backdoor adversarial attacks. The complexity analysis shows that additional training time is the only extra resource cost of LBAA-FedAVG, at 19% above that of FedAVG. Additionally, experiments on short-term load forecasting with grid-level datasets confirm the effectiveness of LBAA-FedAVG for lightweight backdoor attack mitigation in FL settings, offering a trade-off between time efficiency and enhanced defense.
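The abstract describes layer-wise, clustering-based filtering during aggregation. The sketch below illustrates one plausible reading of that idea, not the paper's actual algorithm: for each layer, client updates are clustered into two groups, the minority cluster is treated as anomalous, and only the remaining clients' parameters are averaged for that layer. All names (lbaa_fedavg_sketch, client_weights) and the choice of 2-means clustering are illustrative assumptions.

```python
# Hypothetical sketch of layer-wise anomaly-aware aggregation,
# inspired by (not taken from) the LBAA-FedAVG description.
import numpy as np
from sklearn.cluster import KMeans

def lbaa_fedavg_sketch(client_weights):
    """Aggregate client models per layer, dropping anomalous layer updates.

    client_weights: list with one entry per client; each entry is a list of
                    np.ndarray layer tensors with identical shapes across clients.
    Returns the aggregated model as a list of np.ndarray layer tensors.
    """
    n_clients = len(client_weights)
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Stack this layer's parameters from every client as flat vectors.
        X = np.stack([client_weights[c][layer].ravel() for c in range(n_clients)])
        # Cluster the per-client layer updates into two groups.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).labels_
        # Assume the majority cluster is benign; average only its members,
        # so a client is excluded only for the layers where it is an outlier.
        benign = labels == np.bincount(labels).argmax()
        layer_avg = X[benign].mean(axis=0).reshape(client_weights[0][layer].shape)
        aggregated.append(layer_avg)
    return aggregated
```

Under these assumptions, a compromised client contributes its clean layers to the global model while its poisoned layers are filtered out, which is what distinguishes layer-based aggregation from discarding the whole client model.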