Abstract: Federated learning (FL) is a distributed learning paradigm in which clients cooperate to train a global model without exposing their local private data. However, existing privacy inference attacks on FL show that adversaries can still recover training data from the submitted model updates. Secure aggregation has recently been proposed and integrated into the FL framework; it effectively guarantees privacy through various cryptographic techniques, unfortunately at the cost of substantial communication and computation. In this paper, we propose a highly efficient secure aggregation scheme, Fast-Aggregate, which significantly reduces communication and computation overhead while ensuring data privacy and robustness against client dropout. First, Fast-Aggregate employs a multi-group regular graph for efficient secure aggregation to boost data parallelism. Second, we leverage polynomial multi-point evaluation and fast Lagrange interpolation to handle client dropout and reduce computational complexity. Finally, we adopt an additive mask to guarantee clients' privacy. Riding on these capabilities, Fast-Aggregate achieves a secure aggregation overhead of O(N log^2 N), as opposed to O(N^2) in state-of-the-art works. Moreover, Fast-Aggregate improves training speed without loss of model quality while providing flexibility to deal with client corruption.
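The additive-mask idea mentioned above can be illustrated with a minimal sketch. This is not the Fast-Aggregate protocol itself (which uses a multi-group graph topology and Lagrange interpolation for dropout recovery); it only shows the core cancellation property of pairwise additive masks: each pair of clients agrees on a random value that one adds and the other subtracts, so the masks vanish in the aggregate while individual submissions reveal nothing. All names and the modulus are illustrative assumptions.

```python
import random

def pairwise_masks(num_clients, modulus, seed=0):
    # Illustrative: client i adds +m_ij for each j > i and -m_ij for each
    # j < i, so every mask cancels with its counterpart in the sum.
    rng = random.Random(seed)
    masks = [[0] * num_clients for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.randrange(modulus)
            masks[i][j] = m              # client i adds +m
            masks[j][i] = -m % modulus   # client j adds -m (mod modulus)
    return masks

def masked_update(x, i, masks, modulus):
    # Client i's submission: true local update plus the sum of its masks.
    return (x + sum(masks[i])) % modulus

MOD = 2 ** 16
updates = [3, 7, 11, 5]  # toy scalar model updates, one per client
masks = pairwise_masks(len(updates), MOD)
submissions = [masked_update(x, i, masks, MOD) for i, x in enumerate(updates)]

# The server sums the masked submissions; the pairwise masks cancel,
# so only the aggregate of the true updates is revealed.
aggregate = sum(submissions) % MOD
assert aggregate == sum(updates) % MOD
```

In a real protocol the pairwise values would be derived from key agreement rather than a shared seed, and dropout handling (the role of Lagrange interpolation in Fast-Aggregate) is needed precisely because a departed client's masks no longer cancel.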