FedBiF: Communication-Efficient Federated Learning via Bits Freezing

23 Sept 2023 (modified: 25 Mar 2024), ICLR 2024 Conference Withdrawn Submission
Keywords: federated learning, quantization, bits freezing
Abstract: Federated learning (FL) is a promising privacy-preserving distributed machine learning paradigm; however, it involves significant communication overhead. The latency of the iterative model transmissions between the central server and the clients seriously limits training efficiency. Recently proposed algorithms quantize the model updates to reduce FL communication costs. However, existing quantization methods compress the model updates only after local training, which introduces quantization errors into the model parameters and inevitably degrades model accuracy. We therefore suggest restricting the model updates to a lower quantization bitwidth during local training. To this end, we propose Federated Bits Freezing (FedBiF), a novel FL framework in which clients train only a subset of the individual bits of each parameter, termed activated bits, while freezing the others. In this way, the model updates are restricted to the representation of the activated bits during local training. By alternately activating each bit in different FL rounds, FedBiF achieves extremely efficient communication: only one activated bit per parameter is trained and subsequently transmitted. Extensive experiments are conducted on three popular datasets under both IID and non-IID settings. The results not only validate the superiority of FedBiF in communication compression but also reveal beneficial properties of FedBiF, including model sparsity and better generalization. In particular, FedBiF outperforms all baseline methods, including FedAvg, by a large margin even with 1 bit per parameter (bpp) uplink and 4 bpp downlink communication.
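To make the bits-freezing idea more concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes a hypothetical BitFrozenLinear layer that represents each weight as a fixed-point sum of bit planes, trains only the currently activated bit plane (via a straight-through estimator), and uploads that single bit per parameter each round; all names, the scale factor, and the signed-offset mapping are illustrative assumptions.

```python
# Illustrative sketch of bit freezing (hypothetical, not the FedBiF code).
import torch
import torch.nn as nn


class BitFrozenLinear(nn.Module):
    """Linear layer whose weight is a fixed-point sum of bit planes.

    Only the bit plane selected by set_activated_bit() receives gradients;
    the remaining planes are frozen during local training.
    """

    def __init__(self, in_features, out_features, bitwidth=4, scale=0.05):
        super().__init__()
        self.bitwidth = bitwidth
        self.scale = scale  # assumed fixed-point step size (illustrative)
        # One soft bit plane per bit position; rounded to {0, 1} in forward.
        self.bit_planes = nn.ParameterList(
            [nn.Parameter(torch.rand(out_features, in_features))
             for _ in range(bitwidth)]
        )
        self.activated_bit = 0

    def set_activated_bit(self, k):
        """Freeze every bit plane except bit position k."""
        self.activated_bit = k
        for i, plane in enumerate(self.bit_planes):
            plane.requires_grad_(i == k)

    def forward(self, x):
        weight = 0.0
        for i, plane in enumerate(self.bit_planes):
            # Straight-through rounding: hard {0,1} forward, identity backward.
            hard = plane + (torch.round(plane) - plane).detach()
            weight = weight + (2 ** i) * hard
        # Map the unsigned fixed-point value to a signed weight range.
        weight = self.scale * (weight - 2 ** (self.bitwidth - 1))
        return x @ weight.t()


# Example of one client's FL rounds (illustrative only).
layer = BitFrozenLinear(in_features=8, out_features=4, bitwidth=4)
x = torch.randn(2, 8)
for rnd in range(3):
    layer.set_activated_bit(rnd % layer.bitwidth)
    loss = layer(x).sum()
    loss.backward()  # gradients reach only the activated bit plane
    # Uplink payload: the rounded activated plane is 1 bit per parameter.
    uplink = torch.round(layer.bit_planes[layer.activated_bit]).to(torch.uint8)
```

Under this reading, uploading only the activated plane corresponds to the 1 bpp uplink quoted in the abstract, while broadcasting all four planes corresponds to the 4 bpp downlink; how FedBiF actually parameterizes and aggregates the bits is specified in the paper itself.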
Supplementary Material: zip
Primary Area: infrastructure, software libraries, hardware, etc.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6872