Adaptive Sparse Federated Learning in Large Output Spaces via Hashing

Published: 21 Oct 2022, Last Modified: 05 May 2023
FL-NeurIPS 2022 Poster
Readers: Everyone
Keywords: hashing, federated learning
TL;DR: We propose a sparse federated learning scheme for efficient on-device training.
Abstract: This paper focuses on the on-device training efficiency of federated learning (FL) and demonstrates that it is feasible to exploit sparsity on the client to save both computation and memory for deep neural networks with large output spaces. To this end, we propose a sparse FL scheme based on a hash-based adaptive sampling algorithm. In this scheme, the server maintains the neurons in hash tables, and each client looks up a subset of neurons from the server's hash tables and trains only on that subset. With locality-sensitive hash functions, the scheme retrieves valuable negative-class neurons with respect to the client's data, and the cheap hashing operations incur low computational overhead during sampling. In our empirical evaluation, we show that our approach saves up to $70\%$ of on-device computation and memory during FL while maintaining the same accuracy. Moreover, we demonstrate that the savings in the output layer can be used to increase model capacity and obtain better accuracy under a fixed hardware budget.
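To make the sampling step concrete, below is a minimal sketch of how locality-sensitive hashing can select a subset of output-layer neurons for a client, using random-projection (SimHash) tables. This is not the authors' code; the class and function names (`SimHashTable`, `sample_neurons`), the number of tables, and the bit widths are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of LSH-based
# neuron sampling for a large output layer.
import numpy as np
from collections import defaultdict

class SimHashTable:
    """One random-projection (SimHash) table over output-layer neuron weights."""
    def __init__(self, dim, num_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.projections = rng.standard_normal((num_bits, dim))  # random hyperplanes
        self.buckets = defaultdict(set)                          # hash code -> neuron ids

    def _code(self, vec):
        # Sign pattern of the projections, packed into an integer bucket key.
        bits = (self.projections @ vec) > 0
        return int(np.packbits(bits).tobytes().hex(), 16)

    def insert(self, neuron_id, weight_vec):
        self.buckets[self._code(weight_vec)].add(neuron_id)

    def query(self, embedding):
        # Neurons whose weight vectors hash to the same bucket as the client embedding.
        return self.buckets.get(self._code(embedding), set())

def sample_neurons(tables, embedding, positive_ids):
    """Union the positive (true-label) neurons with LSH-retrieved candidates,
    which act as informative negative classes for this client batch."""
    sampled = set(positive_ids)
    for table in tables:
        sampled |= table.query(embedding)
    return sorted(sampled)

# Example: 10k output classes, 128-d last hidden layer, 4 tables of 12 bits each.
dim, num_classes = 128, 10_000
weights = np.random.randn(num_classes, dim)
tables = [SimHashTable(dim, num_bits=12, seed=t) for t in range(4)]
for t in tables:
    for nid in range(num_classes):
        t.insert(nid, weights[nid])

client_embedding = np.random.randn(dim)
active = sample_neurons(tables, client_embedding, positive_ids=[3, 42])
# The client then computes the forward/backward pass only over the `active`
# subset of output neurons, which is where the compute and memory savings come from.
```

In this sketch the server would build the tables from the current output-layer weights and answer client lookups; hashing a query costs only a few small matrix-vector products, which reflects the low sampling overhead claimed in the abstract.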
Is Student: Yes