FedZipper: A Layer-wise Quantization Compression Framework for Federated Learning with Statistical Heterogeneity
Abstract: Quantization is a common compression scheme for reducing communication traffic in federated learning; it converts model data from floating-point numbers to a low-precision representation. Existing quantization approaches typically apply a fixed degree of quantization to the model parameters at a coarse-grained level, even on non-IID data. This hurts the overall accuracy of the trained model and fails to improve the efficiency of the federated learning process. In this paper, we propose FedZipper, a layer-wise quantization compression framework for federated learning with statistical heterogeneity. FedZipper identifies the critical layers of the learning model on each client and quantizes them accordingly. Moreover, FedZipper adopts an adaptive mechanism to aggregate the different layers transmitted by different clients. Our extensive experiments show up to a 38.8% reduction in communication data volume over the whole training process while incurring only a small decline in model accuracy.
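To illustrate the basic idea of layer-wise quantization mentioned in the abstract, the following is a minimal sketch, not FedZipper's actual algorithm: each layer's parameters are mapped to low-precision integers with a per-layer scale. The function names, the 8-bit symmetric scheme, and the toy model dictionary are illustrative assumptions.

```python
import numpy as np

def quantize_layer(weights, num_bits=8):
    """Uniformly quantize one layer's weights to num_bits integers
    using a per-layer scale (symmetric quantization). Illustrative only."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax if np.any(weights) else 1.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize_layer(q, scale):
    """Recover an approximate float32 tensor from the quantized layer."""
    return q.astype(np.float32) * scale

# Example: per-layer quantization of a toy two-layer model
model = {
    "conv1.weight": np.random.randn(16, 3, 3, 3).astype(np.float32),
    "fc.weight": np.random.randn(10, 64).astype(np.float32),
}
compressed = {name: quantize_layer(w) for name, w in model.items()}
restored = {name: dequantize_layer(q, s) for name, (q, s) in compressed.items()}
```

Because the scale is chosen per layer, each layer can in principle be compressed to a different precision, which is the degree of freedom a layer-wise scheme exploits.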