Breakwater: Securing Federated Learning from Malicious Model Poisoning via Self-Debiasing

Published: 01 Jan 2024 · Last Modified: 17 Oct 2024 · ICC 2024 · CC BY-SA 4.0
Abstract: Deep learning models deployed on edge devices leverage locally collected data to extract intelligence, mitigating the privacy concerns associated with external data sharing. Edge federated learning, an on-device learning paradigm, has emerged as a promising solution, allowing edge nodes to train models locally and share only the trained weights, thereby preserving data privacy. However, it also poses critical challenges, including network burden and potential model poisoning. We introduce Breakwater, a self-debiasing security framework for multi-hop edge federated learning. We incorporate an on-device malicious-weight discriminator at each participant, enhancing the security and robustness of the federated learning process. The framework strategically balances the benefits of participating nodes with timely defenses against potential malicious clients. Building on the discriminator, we further embed a self-debiasing mechanism that determines whether each node retains or discards the weight updates propagated from its child nodes. Our Breakwater framework identifies and filters out harmful weights, ensuring the integrity of the global model. Our work contributes to the ongoing discourse on federated learning security, presenting a solution that maintains efficiency while robustly defending against model poisoning threats. We demonstrate its efficacy in enhancing the reliability of the multi-hop edge federated learning process, recovering up to 69% accuracy under attack and offering a path toward secure and cooperative distributed learning environments.
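The abstract describes per-node filtering of child updates before they are propagated upstream. The sketch below illustrates one way such discriminator-based filtering could look at an intermediate aggregation node; it is not the authors' implementation, and the discriminator interface, score threshold, and simple averaging scheme are assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of discriminator-based
# filtering at an intermediate node in multi-hop federated aggregation.
from typing import Callable, Dict, List
import numpy as np

def filter_and_aggregate(
    local_weights: Dict[str, np.ndarray],
    child_updates: List[Dict[str, np.ndarray]],
    discriminator: Callable[[Dict[str, np.ndarray]], float],
    threshold: float = 0.5,  # assumed benign-score cutoff
) -> Dict[str, np.ndarray]:
    """Keep child updates the discriminator scores as benign, then average."""
    # Score each child's update; scores at or above the threshold are kept.
    kept = [u for u in child_updates if discriminator(u) >= threshold]
    # If every child update is flagged, only the node's own local weights
    # are propagated upstream (the "discard" branch of self-debiasing).
    contributions = [local_weights] + kept
    return {
        name: np.mean([c[name] for c in contributions], axis=0)
        for name in local_weights
    }
```

In this sketch the filtering decision is made locally at each hop, so a poisoned update is dropped before it can influence ancestors closer to the global model, which mirrors the per-node retain-or-discard behavior the abstract attributes to the self-debiasing mechanism.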