Abstract: Secure aggregation is a privacy-aware protocol for model aggregation in federated learning. A major challenge of conventional secure aggregation protocols is their large communication overhead. To address this challenge, we propose the first gradient sparsification framework for communication-efficient secure aggregation, which allows aggregation of sparsified local gradients from a large number of users without revealing the individual local gradient parameters in the clear. We provide theoretical performance guarantees for the proposed framework in terms of communication efficiency, resilience to user dropouts, and model convergence. We further evaluate the performance of our framework through large-scale experiments in a distributed network with up to 100 users, and demonstrate a significant reduction in communication overhead compared to conventional secure aggregation benchmarks.
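For intuition, the sparsification step can be thought of as selecting a small subset of each user's local gradient entries, so that only (index, value) pairs enter the aggregation rather than the full dense vector. The sketch below is a minimal illustration under the assumption of top-k magnitude selection; the function name `topk_sparsify` and the choice of top-k itself are illustrative, not details specified in the abstract.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a flattened gradient.

    Returns the selected indices and values; all remaining entries are
    treated as zero, so a user only needs to communicate k (index, value)
    pairs instead of the full gradient vector.
    """
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

# Example: a 10-dimensional local gradient reduced to its 3 largest entries.
g = np.random.randn(10)
idx, vals = topk_sparsify(g, k=3)
print(idx, vals)
```

In a secure aggregation setting, these sparsified updates would additionally be masked or encoded before transmission so that the server only learns the aggregate; that masking step is the subject of the protocol itself and is not shown here.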