CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization

Published: 28 Jun 2023, Last Modified: 28 Jun 2023. Accepted by TMLR.
Abstract: Devices participating in federated learning (FL) typically have heterogeneous communication, computation, and memory resources. However, in synchronous FL, all devices need to finish training by the same deadline dictated by the server. Our results show that training a smaller subset of the neural network (NN) at constrained devices, i.e., dropping neurons/filters as proposed by the state of the art, is inefficient, preventing these devices from making an effective contribution to the model. This causes unfairness w.r.t. the achievable accuracies of constrained devices, especially in cases with a skewed distribution of class labels across devices. We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices. To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, whereas other layers are still trained in full precision, enabling the model to reach a high accuracy. Thereby, CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system, preserving fairness among participants (accuracy parity) and significantly improving final accuracy.
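To illustrate the core idea of partial freezing and quantization on a single device, the following is a minimal PyTorch sketch, not the authors' implementation: the leading layers are frozen and dynamically quantized to int8 so their forward pass is cheap, while the remaining layers are trained in full precision. The layer sizes and the split point are hypothetical; see the linked repository for the actual method.

```python
import torch
import torch.nn as nn

# The full NN structure is kept on every device.
frozen_part = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                            nn.Linear(256, 256), nn.ReLU())
trained_part = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                             nn.Linear(128, 10))

# Freeze: no gradients or optimizer state for these parameters.
for p in frozen_part.parameters():
    p.requires_grad = False

# Quantize the frozen block (dynamic int8 quantization of nn.Linear),
# reducing computation and memory for its forward pass.
frozen_q = torch.ao.quantization.quantize_dynamic(
    frozen_part, {nn.Linear}, dtype=torch.qint8)

optimizer = torch.optim.SGD(trained_part.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)          # dummy mini-batch
y = torch.randint(0, 10, (32,))   # dummy labels

with torch.no_grad():             # frozen block: forward pass only
    features = frozen_q(x)

logits = trained_part(features)   # trained block: full precision
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

In such a setup, only the parameters of the trained block change and need to be uploaded to the server, which is how freezing reduces communication in addition to computation and memory.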
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/k1l1/CoCoFL
Supplementary Material: zip
Assigned Action Editor: ~Stephan_M_Mandt1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 888