Towards layer-wise quantization for heterogeneous federated clients

Published: 01 Jan 2025, Last Modified: 23 Jul 2025. Computer Networks, 2025. License: CC BY-SA 4.0.
Abstract: Federated Learning (FL) has emerged as a way to train deep learning models on massive private data produced and held by geographically dispersed clients at the network edge. However, in edge computing scenarios, FL usually suffers from constrained and heterogeneous communication resources. To achieve communication-efficient FL, we focus on model quantization. Existing research in FL mainly performs quantization at the granularity of the entire model. However, our empirical analysis shows that, when each layer of a model is quantized with the same quantization level, the amount of memory saved differs significantly across layers. Moreover, the model exhibits different drops in test accuracy when each layer is separately quantized to the same degree. To this end, we propose a more efficient and flexible Layer-wise Quantization scheme for FL, termed FedLQ. We theoretically analyze the relationship between the convergence bound and the quantization level. Furthermore, since quantizing each layer affects communication cost and model accuracy differently, we develop a joint metric (i.e., layer significance) to evaluate the overall influence of layer-wise quantization on model training, and design a significance-aware algorithm that determines adaptive layer-wise quantization levels for different clients. Extensive experiments in a simulated environment show that FedLQ effectively reduces communication consumption while still achieving promising accuracy even with low-bit quantization. Compared to the baselines, FedLQ achieves up to 5.77× speedup in reaching the target accuracy, or up to a 27% improvement in test accuracy under low-bit quantization.
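For intuition, the sketch below shows layer-wise uniform quantization in which each layer of a client model is compressed to its own bit width before upload. It is a minimal illustration only: the helper names, the toy model, and the per-layer bit widths are assumptions for exposition, not the paper's FedLQ implementation or its significance-aware level selection.

```python
import numpy as np

def quantize_layer(weights: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize one layer's weights to the given bit width."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    # Map each weight to an integer code, then back to the dequantized grid.
    codes = np.round((weights - w_min) / scale)
    return w_min + codes * scale

def quantize_model_layerwise(model: dict, bit_widths: dict) -> dict:
    """Apply a (possibly different) quantization level to each layer."""
    return {name: quantize_layer(w, bit_widths[name]) for name, w in model.items()}

# Toy example: two layers quantized with different (hypothetical) bit widths.
model = {"conv1": np.random.randn(16, 3, 3, 3), "fc": np.random.randn(10, 128)}
bit_widths = {"conv1": 8, "fc": 2}
quantized = quantize_model_layerwise(model, bit_widths)
```

In a layer-wise scheme of this kind, layers whose quantization saves little memory or hurts accuracy most can be kept at higher precision, while the remaining layers are compressed more aggressively.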