Abstract: Cross-silo federated learning (FL) allows organizations to collaboratively train machine learning (ML) models by sending their local gradients to a server for aggregation, without having to disclose their data. The main security issues in FL, namely the privacy of the gradients and the trained model, and the correctness verification of the aggregated gradient, are gaining increasing attention from industry and academia. A popular approach to protecting the privacy of the gradients and the trained model is for each client to encrypt its gradients using additively homomorphic encryption (HE). However, this incurs significant computation and communication overheads. On the other hand, to verify the aggregated gradient, several verifiable FL protocols have been proposed that require the server to produce a verifiable aggregated gradient; however, these protocols are also costly in computation and communication. In this paper, we propose SVFL, an efficient protocol for cross-silo FL that supports both secure gradient aggregation and verification. We first replace the heavy HE operations with a simple masking technique. Then, we design an efficient verification mechanism for checking the correctness of the aggregated gradient. We evaluate the performance of SVFL and show, through complexity analysis and experimental evaluation, that its computation and communication overheads remain low even on large datasets, with negligible accuracy loss (less than $1\%$). Furthermore, we experimentally compare SVFL with other existing FL protocols and show that SVFL achieves significant efficiency improvements in both computation and communication.
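The abstract does not spell out SVFL's masking scheme, but the general idea of replacing HE with lightweight masking for secure aggregation can be illustrated with pairwise additive masks: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel exactly when the server sums all masked gradients. The sketch below is an illustrative assumption, not the paper's actual protocol; all function names are hypothetical.

```python
# Illustrative pairwise additive masking for secure gradient aggregation.
# NOT the SVFL protocol itself, just the generic masking principle it builds on.
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Generate masks r[i][j] with r[i][j] = -r[j][i], so that summing the
    masked gradients of all clients cancels every mask."""
    rng = random.Random(seed)
    r = [[[0.0] * dim for _ in range(n_clients)] for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            r[i][j] = m
            r[j][i] = [-x for x in m]  # opposite sign for the peer
    return r

def mask(grad, i, r):
    """Client i hides its gradient by adding all of its pairwise masks."""
    out = list(grad)
    for j, m in enumerate(r[i]):
        if j != i:
            out = [g + x for g, x in zip(out, m)]
    return out

# Three clients with toy 2-dimensional gradients.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
r = pairwise_masks(3, 2, seed=42)
masked = [mask(g, i, r) for i, g in enumerate(grads)]

# The server only ever sees masked gradients; their sum equals the true
# aggregate [9.0, 12.0] because every pairwise mask cancels.
agg = [sum(col) for col in zip(*masked)]
```

Each individual masked gradient looks random to the server, yet the aggregate is exact, which is why masking can replace additively homomorphic encryption for this aggregation pattern at a fraction of the cost.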