POP-FL: Towards Efficient Federated Learning on Edge Using Parallel Over-Parameterization

Published: 01 Jan 2024 · Last Modified: 26 Jul 2025 · IEEE Trans. Serv. Comput. 2024 · CC BY-SA 4.0
Abstract: Federated Learning (FL) is a promising paradigm for mining massive data while respecting users’ privacy. However, deploying FL on resource-constrained edge devices remains elusive due to its high resource demands. In this paper, unlike existing works that rely on expensive dense models, we propose to utilize dynamic sparse training in FL and design a novel sparse-to-sparse FL framework, named POP-FL. The framework reduces both computation and communication overheads while maintaining the performance of the global model. Specifically, POP-FL partitions massive clients into groups and performs parallel parameter exploration, i.e., Parallel Over-Parameterization, through collaboration among these groups. This exploration greatly improves the expressibility and generalizability of sparse training in FL (especially at extreme sparsity levels) by reliably covering a sufficient set of parameters and dynamically updating the structure of the global sparse network during training. Experimental results show that, compared with existing sparse-to-sparse training methods under both IID and non-IID data distributions, POP-FL achieves the best inference accuracy on various representative networks.
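The abstract only sketches the mechanism, so the following is a minimal NumPy illustration of the general idea as described there: clients are partitioned into groups, each group trains a different sparse sub-network of the global model, the group models are merged, and each group's mask is dynamically updated by pruning and regrowing connections. All names (init_mask, prune_and_regrow), the random regrowth criterion, and the toy "local training" step are illustrative assumptions, not the authors' actual POP-FL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mask(n_params, sparsity, rng):
    """Random binary mask keeping a (1 - sparsity) fraction of weights."""
    mask = np.zeros(n_params, dtype=bool)
    n_keep = int(round((1.0 - sparsity) * n_params))
    mask[rng.choice(n_params, size=n_keep, replace=False)] = True
    return mask

def prune_and_regrow(weights, mask, update_frac, rng):
    """Dynamic sparse update: drop the lowest-magnitude active weights and
    regrow the same number of currently inactive connections (random regrowth
    here; a real system might use gradient information)."""
    active = np.flatnonzero(mask)
    n_update = max(1, int(update_frac * active.size))
    drop = active[np.argsort(np.abs(weights[active]))[:n_update]]
    new_mask = mask.copy()
    new_mask[drop] = False
    inactive = np.flatnonzero(~new_mask)
    new_mask[rng.choice(inactive, size=n_update, replace=False)] = True
    return new_mask

# Hypothetical federated setup: 4 client groups, each exploring its own
# sparse sub-network of a 1000-parameter global model at 90% sparsity.
n_params, sparsity, n_groups = 1000, 0.9, 4
global_weights = rng.standard_normal(n_params)
group_masks = [init_mask(n_params, sparsity, rng) for _ in range(n_groups)]

for rnd in range(5):
    group_updates = []
    for mask in group_masks:
        # Placeholder local training: clients in a group only update the
        # coordinates selected by that group's sparse mask.
        local = global_weights * mask
        local[mask] -= 0.01 * rng.standard_normal(int(mask.sum()))
        group_updates.append(local)
    # Merge group models: the union of the group masks covers far more
    # parameters than any single sparse network would.
    stacked = np.stack(group_updates)
    mask_stack = np.stack(group_masks)
    counts = np.maximum(mask_stack.sum(axis=0), 1)
    merged = stacked.sum(axis=0) / counts
    covered = mask_stack.any(axis=0)
    global_weights = np.where(covered, merged, global_weights)
    # Dynamically update each group's sparse structure for the next round.
    group_masks = [prune_and_regrow(global_weights, m, 0.2, rng)
                   for m in group_masks]

coverage = np.stack(group_masks).any(axis=0).mean()
print(f"union of group masks covers {coverage:.0%} of parameters")
```

In this toy setup each mask alone keeps only 10% of the weights, yet the union across the four groups covers roughly a third of the parameter space, which is the intuition behind the claimed expressibility gain from parallel exploration.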