Communication-Efficient Sparse Federated Learning on Non-IID Datasets

TMLR Paper 5911 Authors

17 Sept 2025 (modified: 16 Oct 2025) · Under review for TMLR · CC BY 4.0
Abstract: In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training: parameter saliency scores are computed locally on each client's non-IID data and then aggregated at the server to determine a global mask. Only the sparse model weights are trained and communicated between the clients and the server each round. On standard benchmarks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, SSFL consistently improves the accuracy–sparsity trade-off, achieving more than a 20\% relative error reduction on CIFAR-10 compared to the strongest sparse baseline, while reducing communication costs by $2\times$ relative to dense FL. Finally, in a real-world federated learning deployment, SSFL delivers over $2.3\times$ faster communication, underscoring its practical efficiency.
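The mask-discovery step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the saliency criterion (a SNIP-style score $|w \cdot \partial \mathcal{L}/\partial w|$), the mean aggregation across clients, and all names (`client_saliency`, `build_global_mask`, `density`) are hypothetical choices for exposition.

```python
# Hypothetical sketch of pre-training mask discovery, per the abstract:
# each client scores parameters on its local non-IID data, the server
# aggregates the scores, and a single global sparsity mask is fixed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def client_saliency(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-parameter saliency computed on one client's local batch."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Assumed SNIP-style criterion: |weight * gradient|, flattened
    # into one vector over all model parameters.
    return torch.cat([(p * p.grad).abs().flatten() for p in model.parameters()])


def build_global_mask(scores_per_client: list[torch.Tensor], density: float) -> torch.Tensor:
    """Server-side aggregation: average client scores, keep top-`density` fraction."""
    global_scores = torch.stack(scores_per_client).mean(dim=0)
    k = int(density * global_scores.numel())
    threshold = torch.topk(global_scores, k).values.min()
    # 1 = weight is trained and communicated each round; 0 = pruned.
    return (global_scores >= threshold).float()
```

Because the mask is fixed before training, clients and the server subsequently exchange only the weight values at unmasked positions, which is the source of the communication savings the abstract reports.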
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ali_Ramezani-Kebrya1
Submission Number: 5911