Lightweight Secure Aggregation for Personalized Federated Learning with Backdoor Resistance

Published: 01 Jan 2024 · Last Modified: 12 Jun 2025 · ACSAC 2024 · CC BY-SA 4.0
Abstract: Existing federated learning (FL) systems are highly vulnerable in terms of security and privacy due to their distributed architecture, facing poisoning attacks and inference attacks from adversaries. Prior works have combined poisoning defenses with cryptographic tools, including Secure Multi-Party Computation, Zero-Knowledge Proofs, and Homomorphic Encryption, to build robust secure aggregation methods that provide both security and privacy preservation for FL. Recently, Qin et al. (KDD’23) demonstrated that personalized federated learning (pFL) can effectively resist backdoor injection in poisoning attacks. In this paper, we show that pFL nonetheless remains vulnerable to backdoor attacks as the number of malicious attackers increases. Moreover, we reveal that current robust secure aggregation methods fail to offer an efficient and robust backdoor defense for pFL. We therefore propose FLIGHT, a robust secure aggregation method for pFL. It implements lightweight backdoor detection through a two-stage personalized defense mechanism and ensures privacy preservation using communication-efficient two-party secure computation (2PC) protocols. Extensive experiments on diverse datasets and neural networks validate that FLIGHT reduces run time by up to 64× compared to the prior work RoFL (S&P’23) and by up to 42× compared to FLAME (USENIX Security’22).
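To make the two-stage idea concrete, the sketch below shows one generic way a filter-then-clip aggregation can be structured in plaintext Python. It is an illustrative assumption, not FLIGHT itself: the function `two_stage_aggregate`, the cosine-similarity filter, the `sim_threshold` and `clip_norm` parameters, and the choice of reference vector are all hypothetical, and the actual protocol additionally runs its defense inside 2PC so the server never sees raw updates.

```python
# Illustrative sketch only: a generic two-stage filter-then-clip aggregation,
# NOT the paper's FLIGHT protocol. All names and thresholds are hypothetical.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def two_stage_aggregate(updates, reference, sim_threshold=0.0, clip_norm=1.0):
    """Stage 1: drop updates whose direction deviates too far from a
    reference update (e.g., a trusted or personalized baseline).
    Stage 2: clip the surviving updates' norms and average them,
    bounding the contribution any residual backdoor can make."""
    # Stage 1: directional filtering against the reference.
    survivors = [u for u in updates if cosine(u, reference) >= sim_threshold]
    if not survivors:
        return np.zeros_like(reference)
    # Stage 2: norm clipping, then plain averaging.
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in survivors]
    return np.mean(clipped, axis=0)

# Toy usage: nine benign updates plus one scaled outlier (a crude "backdoor").
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, 100) + 0.5 for _ in range(9)]
malicious = [-10.0 * np.ones(100)]
agg = two_stage_aggregate(benign + malicious, reference=0.5 * np.ones(100))
print(agg[:3])
```

In this toy run the malicious update points opposite the reference, so stage 1 removes it, and stage 2's clipping would limit its effect even if it slipped through; a secure realization would evaluate both stages under 2PC rather than on plaintext vectors.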