Abstract: Secure aggregation is an information-theoretic mechanism for gradient aggregation in federated learning that aggregates local user gradients without revealing them in the clear. In this work, we study secure aggregation under gradient sparsification constraints for resource-limited wireless networks, where only a small fraction of the local parameters (as opposed to the full gradient) is aggregated from each user during training. We first identify the vulnerabilities of conventional secure aggregation mechanisms under gradient sparsification: even when individual gradients are not disclosed in the clear, aggregating sparsified gradients can reveal sensitive user data through the auxiliary coordinate information shared during sparsification. To address this challenge, we then propose TinySecAgg, a novel coordinate-hiding sparsified secure aggregation mechanism with formal information-theoretic privacy guarantees. Our framework reduces the communication overhead of conventional secure aggregation baselines by an order of magnitude (up to $22.5\times$) without compromising model accuracy.
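The coordinate-leakage issue can be seen in a minimal sketch of conventional pairwise-mask secure aggregation applied to top-$k$ sparsified gradients. This is an illustrative toy example, not the paper's protocol: the pairwise masks, top-$k$ selection, and all parameters below are assumptions chosen for illustration. The masks cancel in the server-side sum, so individual gradient values stay hidden, yet the nonzero coordinate indices each user reports for sparsified aggregation remain visible.

```python
# Toy sketch (not TinySecAgg): pairwise additive masking over sparsified gradients.
import numpy as np

rng = np.random.default_rng(0)
num_users, dim, k = 3, 10, 3          # k = coordinates kept per user (illustrative)

# Local gradients and top-k sparsification; the selected coordinates are
# shared with the server in the clear under a conventional scheme.
grads = [rng.normal(size=dim) for _ in range(num_users)]
coords = [np.argsort(-np.abs(g))[:k] for g in grads]          # leaked coordinate sets
sparse = [np.where(np.isin(np.arange(dim), c), g, 0.0) for g, c in zip(grads, coords)]

# Pairwise masks: user i adds mask_(i,j) for j > i and subtracts it for j < i,
# so every mask cancels once the server sums the masked updates.
masks = {(i, j): rng.normal(size=dim)
         for i in range(num_users) for j in range(i + 1, num_users)}
masked = []
for i in range(num_users):
    m = sparse[i].copy()
    for j in range(num_users):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

aggregate = np.sum(masked, axis=0)    # equals the sum of the sparsified gradients
assert np.allclose(aggregate, np.sum(sparse, axis=0))
print("coordinate sets visible to the server:", [sorted(c.tolist()) for c in coords])
```

Even though each masked update looks random in isolation, the printed coordinate sets expose which parameters each user deemed most significant, which is the auxiliary information the abstract refers to.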