Abstract: Federated learning holds great promise as a solution to the increasingly severe problem of data isolation. However, it faces two significant security challenges: ensuring the data privacy of participants and guaranteeing the correctness of the aggregation results. Moreover, existing secure federated learning schemes have notable limitations: some cannot handle participant dropouts, others do not protect the privacy of the global model, and all rely on a trusted authority, which limits their practicality. To address these challenges, we introduce TSVFL, a two-server verifiable and privacy-preserving federated learning scheme. TSVFL tolerates participant dropouts during training and enables secure federated model training without requiring a trusted authority. Comprehensive security analysis demonstrates that TSVFL protects the data privacy of participants against various inference attacks and ensures training integrity. Furthermore, extensive experiments on real-world datasets confirm that TSVFL achieves lossless accuracy and practical performance.