QPFFL: Advancing Federated Learning with Quantum-Resistance, Privacy, and Fairness

Published: 01 Jan 2024, Last Modified: 20 May 2025, GLOBECOM 2024, CC BY-SA 4.0
Abstract: Federated Learning (FL) has gained prominence for enabling collaborative training across multiple devices without data sharing. However, traditional FL overlooks two crucial aspects: collaborative fairness and privacy protection. Typically, all participants receive the same model regardless of their contribution, and plaintext transmission of model gradients risks privacy leakage. Existing fairness-enhancing approaches often increase privacy risks, while security-focused methods suffer from efficiency limitations, failing to defend against multiple threats simultaneously. To address these challenges, we propose QPFFL, a novel fair and secure FL framework. First, we propose a Privacy-Preserving Reputation Mechanism (PPRM) that assigns global models to users based on their performance during training, promoting the fairness of FL. We employ Functional Encryption (FE) to enable efficient and quantum-resistant aggregation, securing user model parameters. Furthermore, a reputation threshold helps identify malicious behavior. Theoretical analysis and experiments demonstrate QPFFL's effectiveness in thwarting various attacks without compromising privacy or efficiency, thereby providing a comprehensive solution for secure and fair FL.
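The abstract describes a reputation threshold that gates which clients contribute to aggregation. The paper's actual PPRM and FE-based protocol are not specified here, so the following is only a minimal, hypothetical sketch of the general idea: clients below a reputation threshold are excluded as potentially malicious, and the remaining updates are averaged with reputation-proportional weights. All function and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch of reputation-gated aggregation (NOT the paper's
# PPRM/FE protocol): clients below a reputation threshold are dropped,
# and the remaining updates are averaged with reputation-proportional
# weights.

def reputation_aggregate(updates, reputations, threshold=0.5):
    """Aggregate client updates, excluding low-reputation clients.

    updates: dict client_id -> list of parameter floats
    reputations: dict client_id -> reputation score in [0, 1]
    Returns (aggregated parameters, set of kept client ids).
    """
    kept = {c: r for c, r in reputations.items() if r >= threshold}
    if not kept:
        raise ValueError("no client passed the reputation threshold")
    total = sum(kept.values())
    dim = len(next(iter(updates.values())))
    agg = [0.0] * dim
    for c, r in kept.items():
        weight = r / total  # reputation-proportional weight
        for i, p in enumerate(updates[c]):
            agg[i] += weight * p
    return agg, set(kept)

# Example: "c3" submits an outlier update and has low reputation,
# so the threshold excludes it from the aggregate.
updates = {"c1": [1.0, 2.0], "c2": [3.0, 4.0], "c3": [100.0, -100.0]}
reps = {"c1": 0.9, "c2": 0.6, "c3": 0.1}
agg, kept_clients = reputation_aggregate(updates, reps)
```

In the paper's full protocol, such an aggregation would additionally run over encrypted parameters via Functional Encryption, so the server learns only the aggregate rather than individual updates.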