ESFL: Accelerating Poisonous Model Detection in Privacy-Preserving Federated Learning

Honghong Zeng, Jiong Lou, Kailai Li, Chentao Wu, Guangtao Xue, Yuan Luo, Fan Cheng, Wei Zhao, Jie Li

Published: 01 Jan 2025, Last Modified: 08 Nov 2025, IEEE Transactions on Dependable and Secure Computing, License: CC BY-SA 4.0
Abstract: Privacy-preserving federated learning (PPFL) is a promising secure distributed learning paradigm that enables collaborative training of a global machine learning model by sharing encrypted local models instead of sensitive raw data. PPFL, however, is vulnerable to model poisoning attacks. Most existing Byzantine-robust PPFL solutions employ two non-colluding servers to achieve secure model detection and aggregation by executing interactive security protocols, which incur considerable computation and communication overheads. To tackle this issue, we propose an efficient and secure federated learning (ESFL) technique to accelerate the detection of poisonous models in PPFL. First, to improve computational efficiency, we construct a lightweight non-interactive efficient-decryption functional encryption (NED-FE) scheme to protect the data privacy of local models. Then, to ensure high communication performance, we carefully design a non-interactive privacy-preserving robust aggregation strategy that blindly detects poisonous models and aggregates only the benign ones. Finally, we implement ESFL and conduct extensive theoretical analysis and experiments. The numerical results demonstrate that ESFL not only achieves its confidentiality and robustness design goals but also maintains high efficiency: compared with the baseline, ESFL reduces aggregation latency by up to 88%.
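To make the robust-aggregation idea concrete, the following is a minimal plaintext sketch of Byzantine-robust aggregation in federated learning: client updates far from a robust reference point are treated as suspected poisonous models and excluded before averaging. This is an illustrative assumption-based example only; it does not reproduce ESFL's NED-FE scheme or its non-interactive detection over encrypted models, and the function `robust_aggregate`, its `tolerance` parameter, and the threshold rule are hypothetical choices for illustration.

```python
import numpy as np

def robust_aggregate(updates, tolerance=2.0):
    """Aggregate client model updates, filtering suspected poisonous ones.

    Each row of `updates` is one client's flattened model update.
    A client is kept only if its distance to the coordinate-wise median
    is within `tolerance` times the median of those distances.
    (Hypothetical rule for illustration; not the ESFL detection strategy.)
    """
    updates = np.asarray(updates, dtype=float)
    center = np.median(updates, axis=0)                  # robust reference point
    dists = np.linalg.norm(updates - center, axis=1)     # per-client deviation
    threshold = tolerance * np.median(dists)             # data-driven cutoff
    benign = dists <= threshold                          # flag suspected poisoners
    return updates[benign].mean(axis=0), benign

# Example: 4 benign clients plus 1 poisoned update with a large malicious shift.
rng = np.random.default_rng(0)
benign_updates = rng.normal(0.0, 0.1, size=(4, 8))
poisoned_update = benign_updates[0] + 5.0                # crude model-poisoning attack
all_updates = np.vstack([benign_updates, poisoned_update])

aggregate, kept = robust_aggregate(all_updates)
print("kept clients:", kept)                             # poisoned client is filtered out
print("aggregate:", aggregate)
```

In ESFL, an analogous detection-then-aggregation step is performed without revealing the plaintext updates; the sketch above only conveys the robustness logic on unencrypted data.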