Quantization Bits Allocation for Wireless Federated Learning

Published: 01 Jan 2023 · Last Modified: 08 Apr 2025 · IEEE Trans. Wirel. Commun. 2023 · CC BY-SA 4.0
Abstract: Federated learning (FL) enables multiple clients to collaborate on a common learning task by exchanging only model updates. With the progressive growth of deep learning models, communication is becoming a primary bottleneck of FL. Quantizing model updates before transmission is an effective technique for reducing communication overhead. Most prior literature assumes lossless transmission, but in practice, quantized model updates are distorted by wireless channels owing to the variation of client locations. This paper therefore focuses on the analysis and design of personalized model update quantization that explicitly incorporates channel diversity in wireless FL. We present a novel convergence analysis of quantized FL that encompasses full and partial client participation, single and multiple local training iterations, and convex and non-convex loss functions. This analysis explicitly captures the impact of personalized quantization error, channel diversity, and model aggregation in FL, and elucidates their tradeoff in tightening the convergence rate upper bound. An optimization framework, which seeks an optimal allocation scheme given a total budget of quantization bits, is proposed by minimizing this upper bound with respect to channel quality. A nearly optimal solution to this non-convex integer programming problem is derived by analytically solving the Karush–Kuhn–Tucker (KKT) optimality conditions followed by a linear search. From the perspective of outlier detection, the channel-aware allocation scheme is further extended to robust model aggregation against client dropouts. Comprehensive numerical evaluation demonstrates the performance gains of the proposed scheme over the vanilla scheme with equal quantization bits, particularly in terms of training stability, test accuracy, and robustness.
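To make the quantization step concrete, below is a minimal sketch of a b-bit stochastic uniform quantizer of the kind commonly applied to model updates in quantized FL (in the style of QSGD). The function name `quantize_update` and the per-tensor min-max scaling are illustrative assumptions, not taken from the paper; the paper's personalized quantizer may differ in its scaling and level placement.

```python
import numpy as np

def quantize_update(update: np.ndarray, bits: int, rng=None) -> np.ndarray:
    """Stochastically quantize a model update to 2**bits uniform levels.

    A generic QSGD-style sketch: values are mapped onto a uniform grid
    over [min, max] and rounded stochastically, which keeps the
    quantized update unbiased in expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    levels = 2 ** bits - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / levels
    if scale == 0:  # constant tensor: nothing to quantize
        return update.copy()
    # Map to [0, levels], then round up with probability equal to the
    # fractional part (unbiased stochastic rounding).
    normalized = (update - lo) / scale
    floor = np.floor(normalized)
    prob_up = normalized - floor
    q = floor + (rng.random(update.shape) < prob_up)
    return lo + q * scale
```

For such a quantizer, the expected squared error shrinks roughly as 4^{-b} with the bit width b, which is what makes a per-client bit budget worth optimizing against channel quality.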
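The abstract's allocation framework (KKT conditions plus a linear search over an integer budget) can be illustrated with the following sketch. It assumes a stand-in per-client error bound of the form a_k * 4^{-b_k}, where the coefficient a_k grows with channel distortion; the paper's actual convergence-bound objective is not reproduced here, and `allocate_bits` is a hypothetical helper.

```python
import numpy as np

def allocate_bits(a, total_bits):
    """Allocate integer quantization bits across K clients to minimize
    sum_k a[k] * 4**(-b[k])  subject to  sum(b) == total_bits, b >= 1.

    The KKT conditions of the continuous relaxation yield a
    water-filling-style closed form; a greedy integer repair then
    plays the role of the final search over the budget.
    """
    a = np.asarray(a, dtype=float)
    K = len(a)
    assert total_bits >= K, "budget must allow at least 1 bit per client"
    # Continuous KKT solution: b_k = B/K + 0.5 * (log2 a_k - mean log2 a)
    b_cont = total_bits / K + 0.5 * (np.log2(a) - np.log2(a).mean())
    b = np.maximum(1, np.floor(b_cont).astype(int))
    # Spend leftover bits where the marginal error reduction is largest.
    def gain(k):
        return a[k] * (4.0 ** (-b[k]) - 4.0 ** (-(b[k] + 1)))
    while b.sum() < total_bits:
        b[max(range(K), key=gain)] += 1
    # If over budget, withdraw bits where the marginal loss is smallest.
    while b.sum() > total_bits:
        k = min((k for k in range(K) if b[k] > 1),
                key=lambda k: a[k] * (4.0 ** (-(b[k] - 1)) - 4.0 ** (-b[k])))
        b[k] -= 1
    return b
```

Under this assumed objective, clients with worse channels (larger a_k) receive more bits, which matches the abstract's intuition that a channel-aware allocation outperforms equal bits per client.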