FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

TMLR Paper4900 Authors

20 May 2025 (modified: 30 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact only with a central server, thereby preserving privacy. A critical bottleneck in FL is the communication cost. A pivotal strategy to mitigate this burden is Local Training, which runs multiple local stochastic gradient descent iterations between communication rounds. Our work is inspired by the Scaffnew algorithm, which has considerably advanced the reduction of communication complexity in FL. We introduce FedComLoc (Federated Compressed and Local Training), which integrates practical and effective compression into Scaffnew to further enhance communication efficiency. Extensive experiments using the popular Top-K compressor and quantization demonstrate that it substantially reduces communication overhead in heterogeneous settings.
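To illustrate the Top-K compression referenced in the abstract, the following minimal sketch keeps only the K largest-magnitude coordinates of a client's model update and zeroes out the rest, so that only K values (plus their indices) need to be communicated. This is a generic illustration under assumed PyTorch tensors, not the authors' implementation; the function name `topk_compress` and the example sizes are hypothetical.

```python
import torch

def topk_compress(update: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest-magnitude entries of an update tensor; zero the rest."""
    flat = update.flatten()
    k = min(k, flat.numel())
    # Indices of the k entries with the largest absolute value.
    _, idx = torch.topk(flat.abs(), k)
    compressed = torch.zeros_like(flat)
    compressed[idx] = flat[idx]
    return compressed.view_as(update)

# Example: compress a local update, keeping 10% of its coordinates.
update = torch.randn(1000)
sparse_update = topk_compress(update, k=100)
```

In a communication-efficient FL scheme of this kind, each client would apply such a compressor to the quantity it sends to the server at each communication round, trading a small amount of accuracy per round for a large reduction in transmitted bits.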
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Tian_Li1
Submission Number: 4900