LPP-FL: A Lightweight Privacy-Preserving Federated Learning Against Byzantine Attacks on Non-IID Data

Jiguo Yu, Hongliang Zhang, Qi Xia, Yifei Zou, Yangyang Liu

Published: 01 Jan 2025 · Last Modified: 04 Nov 2025 · IEEE Transactions on Information Forensics and Security · CC BY-SA 4.0
Abstract: As a distributed computing paradigm, federated learning (FL) enables multiple clients to cooperatively train a model in edge scenarios without sharing their raw training data. Nonetheless, FL is vulnerable to Byzantine attacks due to its distributed nature. While numerous defenses have been proposed, they ignore the inconsistency of local models across clients caused by data heterogeneity (i.e., Non-IID data), which severely degrades FL performance. Moreover, protecting client privacy typically requires integrating complex cryptographic algorithms into FL, which substantially increases the computation overhead on edge nodes. To tackle these issues, this paper proposes a lightweight privacy-preserving federated learning framework, named LPP-FL, that significantly improves the performance of FL against Byzantine attacks on Non-IID data. Specifically, we incorporate a correction term into local model training to mitigate the inconsistency of local models across clients caused by data heterogeneity. Moreover, we design a secure protocol deployed on two servers that achieves Byzantine-robust aggregation while providing lightweight privacy protection for clients. Theoretical analysis demonstrates the security and robustness of LPP-FL. Extensive experiments show that LPP-FL exhibits superior performance against Byzantine attacks across various data distributions.
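To make the correction-term and two-server ideas in the abstract concrete, the following is a minimal Python sketch, not taken from the paper: it pairs a SCAFFOLD-style control-variate correction on the client (used only as a stand-in for LPP-FL's correction term) with additive secret sharing of model updates across two servers, so that neither server alone sees a plaintext update while their aggregated shares still reconstruct the mean. The fixed-point encoding, modulus choice, and all function names are illustrative assumptions; LPP-FL's actual protocol, including its Byzantine-robust filtering, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4            # toy model dimension
MODULUS = 2**31    # ring for additive secret sharing (illustrative choice)
SCALE = 10**6      # fixed-point scaling factor (illustrative choice)

def share(update):
    """Split a fixed-point-encoded update into two additive shares, one per server."""
    fixed = np.round(update * SCALE).astype(np.int64) % MODULUS
    share_a = rng.integers(0, MODULUS, size=update.shape, dtype=np.int64)
    share_b = (fixed - share_a) % MODULUS
    return share_a, share_b

def reconstruct(sum_a, sum_b, n_clients):
    """Servers combine only aggregated shares; recover the mean update."""
    total = (sum_a + sum_b) % MODULUS
    total = np.where(total > MODULUS // 2, total - MODULUS, total)  # undo wrap for negatives
    return total.astype(np.float64) / SCALE / n_clients

def local_update(local_grad, local_ctrl, global_ctrl, lr=0.1):
    """Drift-corrected local step (SCAFFOLD-style control variate,
    a stand-in for the paper's correction term, not its formulation)."""
    return -lr * (local_grad - local_ctrl + global_ctrl)

# Toy run with 3 honest clients.
global_ctrl = np.zeros(DIM)
updates = [local_update(rng.normal(size=DIM), np.zeros(DIM), global_ctrl)
           for _ in range(3)]

sum_a = np.zeros(DIM, dtype=np.int64)
sum_b = np.zeros(DIM, dtype=np.int64)
for u in updates:
    a, b = share(u)
    sum_a = (sum_a + a) % MODULUS   # held by server A
    sum_b = (sum_b + b) % MODULUS   # held by server B

print("aggregated mean update:", reconstruct(sum_a, sum_b, len(updates)))
print("plaintext mean update :", np.mean(updates, axis=0))
```

The two printed vectors agree up to fixed-point rounding, illustrating how lightweight additive sharing can keep individual updates hidden from each server while still permitting aggregation.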