Federated Progressive Sparsification (Purge-Merge-Tune)

Published: 21 Oct 2022, Last Modified: 03 Nov 2024. FL-NeurIPS 2022 Poster.
Keywords: Federated Learning, Sparsification, Model Pruning
TL;DR: Federated training with progressive sparsification outperforms other pruning strategies, achieving learning performance comparable to training without pruning.
Abstract: We present FedSparsify, a sparsification strategy for federated training based on progressive weight magnitude pruning, which provides several benefits. First, since the size of the network becomes increasingly smaller, computation and communication costs during training are reduced. Second, the models are incrementally constrained to a smaller set of parameters, which facilitates alignment/merging of the local models, and results in improved learning performance at high sparsity. Third, the final sparsified model is significantly smaller, which improves inference efficiency. We analyze FedSparsify's convergence and empirically demonstrate that FedSparsify can learn a subnetwork smaller than a tenth of the size of the original model with the same or better accuracy compared to existing pruning and no-pruning baselines across several challenging federated learning environments. Our approach leads to an average 4-fold inference efficiency speedup and a 15-fold model size reduction over different domains and neural network architectures.
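The purge-merge-tune loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the toy least-squares local objective, and the single flat weight vector are all simplifying assumptions made here to show the core idea of progressively pruning by weight magnitude, fine-tuning on each client over the shared sparse support, and merging by federated averaging.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(round(sparsity * weights.size))  # number of weights to prune
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

def fedsparsify_round(global_w, client_data, target_sparsity, local_steps=5, lr=0.1):
    """One hypothetical purge-merge-tune round on a flat weight vector."""
    # Purge: prune the global model by magnitude at this round's target sparsity.
    mask = magnitude_mask(global_w, target_sparsity)
    pruned = global_w * mask
    # Tune: each client fine-tunes the pruned model (toy least-squares objective).
    local_models = []
    for X, y in client_data:
        w = pruned.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad * mask  # pruned coordinates stay exactly zero
        local_models.append(w)
    # Merge: federated averaging over the shared sparse support.
    return np.mean(local_models, axis=0)

# Progressive schedule: sparsity increases across rounds, so the model
# shrinks gradually instead of being pruned all at once.
rng = np.random.default_rng(0)
d = 20
w = rng.normal(size=d)
clients = [(rng.normal(size=(10, d)), rng.normal(size=10)) for _ in range(3)]
for sparsity in (0.2, 0.5, 0.8):
    w = fedsparsify_round(w, clients, sparsity)
```

Because every client tunes within the same mask, the local models remain aligned on an identical sparse support, which is the property the abstract credits for improved merging at high sparsity.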
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/federated-progressive-sparsification/code)