Personalized Federated Learning with Communication Compression

Published: 23 Nov 2023, Last Modified: 23 Nov 2023. Accepted by TMLR.
Abstract: In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets held by resource-constrained, heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not benefit every device due to the heterogeneity of the data across devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models, aimed at balancing the trade-off between the traditional global model and the purely local models that individual devices could train using their private data alone. They derived a new algorithm, called *loopless gradient descent* (L2GD), to solve it, and showed that this algorithm leads to improved communication complexity guarantees in regimes where more personalization is required. In this paper, we equip their L2GD algorithm with a *bidirectional* compression mechanism to further reduce the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, in which communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a convergence rate similar to that of vanilla SGD without compression. To empirically validate the efficiency of our algorithm, we perform diverse numerical experiments on both convex and non-convex problems using various compression techniques.
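For context, the personalized FL formulation of Hanzely and Richtárik (2020) referenced in the abstract is typically written as the following penalized objective; this is a sketch of the cited formulation, and the notation here may differ slightly from the paper's.

```latex
% Sketch of the personalized FL objective of Hanzely & Richtárik (2020):
% x_1, ..., x_n are the local models, \bar{x} is their average, and
% \lambda >= 0 controls the degree of personalization.
\min_{x_1,\dots,x_n \in \mathbb{R}^d}\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} f_i(x_i)}_{\text{local losses}}
  \;+\;
  \lambda \underbrace{\frac{1}{2n}\sum_{i=1}^{n} \bigl\| x_i - \bar{x} \bigr\|^2}_{\text{penalty on deviation from the mean model}},
  \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i .
% \lambda = 0 recovers purely local training on each device, while
% \lambda \to \infty forces x_1 = \dots = x_n, recovering the usual
% single global model; intermediate \lambda interpolates between the two.
```

The probabilistic communication protocol mentioned in the abstract refers to L2GD taking, at each iteration, either a local gradient step (no communication) or an aggregation step toward the mean model (communication with the server), chosen at random rather than on a fixed schedule.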
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: All changes are marked in **blue**.
Supplementary Material: pdf
Assigned Action Editor: ~Thang_D_Bui1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1137