Dynamic Aggregation for Heterogeneous Quantization in Federated Learning

IEEE Trans. Wirel. Commun., 2021
Abstract: Communication is widely known as the primary bottleneck of federated learning, and quantization of local model updates before uploading to the parameter server is an effective way to reduce the communication overhead. However, prior literature typically assumes homogeneous quantization across all clients, while in reality devices are heterogeneous and support different levels of quantization precision. This heterogeneity poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an open problem. In this paper, we propose FedHQ (Federated Learning with Heterogeneous Quantization), which allocates different aggregation weights to different clients by minimizing the convergence-rate upper bound as a function of the heterogeneous quantization errors of all clients, for both strongly convex and non-convex loss functions. To further accelerate convergence, the instantaneous quantization error is computed and piggybacked when each client uploads its local model update, and the server dynamically calculates the corresponding weight for the current aggregation round. Numerical experiments demonstrate the performance advantages of FedHQ over both vanilla FedAvg with standard equal weights and a heuristic aggregation scheme that assigns weights linearly proportional to the clients' quantization precision.
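To make the error-aware dynamic weighting concrete, here is a minimal Python sketch of one round of server-side aggregation in the spirit of the abstract: each client quantizes its update, reports (piggybacks) its instantaneous quantization error, and the server weights clients with smaller error more heavily. The stochastic uniform quantizer, the 1 / (1 + error) weighting rule, and the function names `quantize` and `aggregate` are illustrative assumptions for this sketch, not the exact bound-minimizing weights derived in the paper.

```python
import numpy as np

def quantize(update, num_levels, rng):
    """Stochastic uniform quantizer (illustrative): maps each coordinate onto
    `num_levels` evenly spaced levels and also returns the instantaneous
    relative quantization error to piggyback to the server."""
    scale = np.max(np.abs(update)) + 1e-12
    normalized = update / scale                       # now in [-1, 1]
    step = 2.0 / (num_levels - 1)
    lower = np.floor((normalized + 1.0) / step)
    prob = (normalized + 1.0) / step - lower          # stochastic rounding
    quantized = (lower + (rng.random(update.shape) < prob)) * step - 1.0
    q_update = quantized * scale
    # Relative squared quantization error, reported alongside the update.
    q_error = np.sum((q_update - update) ** 2) / (np.sum(update ** 2) + 1e-12)
    return q_update, q_error

def aggregate(updates, errors):
    """Server-side aggregation: clients with smaller reported quantization
    error receive larger weight. The 1 / (1 + error) rule is an assumed,
    illustrative choice, not the paper's derived optimal weights."""
    weights = np.array([1.0 / (1.0 + e) for e in errors])
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Example round: three clients with heterogeneous quantization precision.
rng = np.random.default_rng(0)
base_update = rng.standard_normal(10)
local_updates = [base_update + 0.1 * rng.standard_normal(10) for _ in range(3)]
levels = [3, 7, 65]                                   # coarse to fine quantizers
quantized, errs = zip(*(quantize(u, L, rng) for u, L in zip(local_updates, levels)))
global_update = aggregate(list(quantized), list(errs))
print("reported errors:", [round(e, 4) for e in errs])
print("aggregated update:", np.round(global_update, 3))
```

The printed errors shrink as the number of quantization levels grows, so the finely quantized client dominates the weighted average, which is the qualitative behavior the abstract attributes to FedHQ's dynamic weighting.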