Abstract: It is well understood that client-master communication can be a primary bottleneck in federated learning (FL). In this work, we address this issue with a novel client subsampling scheme, where we restrict the number of clients allowed to communicate their updates back to the master node. In each communication round, all participating clients compute their updates, but only the ones with "important" updates communicate back to the master. We show that importance can be measured using only the norm of the update and give a formula for optimal client participation. This formula minimizes the distance between the full update, where all clients participate, and our limited update, where the number of participating clients is restricted. In addition, we provide a simple algorithm that approximates the optimal formula for client participation, which allows for secure aggregation and stateless clients, and thus does not compromise client privacy. We show both theoretically and empirically that for Distributed SGD (DSGD) and Federated Averaging (FedAvg), the performance of our approach can be close to full participation and superior to the baseline where participating clients are sampled uniformly. Moreover, our approach is orthogonal to and compatible with existing methods for reducing communication overhead, such as local methods and communication compression methods.
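As an illustration of the norm-based selection idea described in the abstract, the following is a minimal Python sketch, not the authors' implementation (see the linked repository for that). The function names, the water-filling allocation p_i = min(1, c · u_i) with c chosen so that the expected number of reporting clients equals a budget m, and the inverse-probability reweighting are assumptions made for this sketch; the paper's exact optimal formula and its secure-aggregation-friendly approximation differ in details.

```python
import numpy as np

def sampling_probabilities(update_norms, m):
    """Assign participation probabilities p_i = min(1, c * u_i), with c chosen so
    that the expected number of participating clients equals m.
    Clients with larger update norms are more likely to report back."""
    u = np.asarray(update_norms, dtype=float)
    n = len(u)
    assert 0 < m <= n
    order = np.argsort(-u)          # client indices, largest norm first
    u_sorted = u[order]
    p = np.zeros(n)
    remaining_budget = float(m)
    for k in range(n):
        tail_sum = u_sorted[k:].sum()
        if tail_sum <= 0:
            break                   # remaining clients have zero-norm updates
        c = remaining_budget / tail_sum
        if c * u_sorted[k] <= 1:
            # No further saturation needed: probabilities proportional to norms.
            p[order[k:]] = c * u_sorted[k:]
            break
        # This client saturates at probability 1; spend one unit of budget on it.
        p[order[k]] = 1.0
        remaining_budget -= 1.0
    return p

def sample_and_aggregate(updates, m, rng=None):
    """Include each client independently with its probability and aggregate with
    inverse-probability weights, keeping the estimate of the full update unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    norms = [np.linalg.norm(g) for g in updates]
    p = sampling_probabilities(norms, m)
    agg = np.zeros_like(np.asarray(updates[0], dtype=float))
    for g, p_i in zip(updates, p):
        if p_i > 0 and rng.random() < p_i:
            agg += np.asarray(g, dtype=float) / p_i  # debias the dropped clients
    return agg / len(updates)
```

For example, calling `sample_and_aggregate(updates, m=10)` on a list of 100 client update vectors returns an unbiased estimate of the full average update while only about 10 clients report back in expectation.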
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Major changes in the camera-ready version:
1. We have included an additional experiment on the balanced CIFAR100 dataset in Appendix G.
2. We have rerun the experiment on the Shakespeare dataset with additional values of m and n, and updated the corresponding figures.
Video: https://youtu.be/lhLJL1FJ_OE
Code: https://github.com/SamuelHorvath/FL-optimal-client-sampling
Assigned Action Editor: ~Aurélien_Bellet1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 123