An Efficient and Secure Privacy-Preserving Federated Learning Via Lattice-Based Functional Encryption
Abstract: In recent years, federated learning (FL) aggregation techniques based on functional encryption (FE) have garnered increased attention. This growing interest stems from the distinct advantages of FE over traditional aggregation methods, particularly in computational efficiency, communication cost, and functionality. However, privacy-preserving federated learning (PPFL) schemes utilizing FE still grapple with significant privacy and security challenges. For instance, current implementations fail to safeguard aggregated intermediate results and remain susceptible to quantum attacks, among other concerns. To address these problems, we first propose PIM-MCFE, a new FE scheme based on the Learning with Errors (LWE) assumption, which hides the intermediate aggregated results and is computationally efficient. We extend the scheme to the aggregation task of PPFL and propose an optimization technique, plaintext packaging, to accelerate the training process. We analyze the security of the proposed PPFL scheme theoretically and demonstrate its efficiency and practicality through extensive experiments. The results show that the encryption efficiency of our scheme improves by 20× and 50× over HybridAlpha and CryptoFE, respectively, and that the decryption operation achieves a three-orders-of-magnitude efficiency improvement.