Abstract: Federated learning (FL) is a promising paradigm in which edge devices (clients) collaboratively train a machine learning model orchestrated by a server. A critical problem in FL is that the trained model may be unfair, in that it unfairly advantages or disadvantages some of the devices. To tackle this problem, we propose AdaFed. The goal of AdaFed is to find an updating direction for the server along which (i) all the clients' loss functions decrease; and (ii) more importantly, the loss functions of the clients with larger values decrease at a higher rate. AdaFed adaptively tunes this common direction based on the values of the local gradients and loss functions. We validate the effectiveness of AdaFed on a suite of federated datasets, and demonstrate that AdaFed outperforms state-of-the-art fair FL methods.
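For intuition only, the following minimal sketch illustrates the general idea described in the abstract: aggregating client gradients into a single server direction while giving higher-loss clients more influence. The function name, the loss-power weighting scheme, and the parameter `gamma` are our own assumptions for illustration; this is not AdaFed's actual update rule, which is specified in the paper itself.

```python
import numpy as np

def common_update_direction(grads, losses, gamma=1.0):
    """Hypothetical sketch: combine per-client gradients into one server
    direction, weighting clients with larger losses more heavily.

    grads:  list of 1-D numpy arrays, one local gradient per client
    losses: list of non-negative floats, one local loss per client
    gamma:  exponent controlling how strongly large losses dominate
            (assumed knob, not a parameter from the paper)
    """
    # Normalize each gradient so no client dominates by scale alone.
    unit_grads = [g / (np.linalg.norm(g) + 1e-12) for g in grads]

    # Weight clients by loss**gamma, biasing the direction toward
    # high-loss clients so their losses should drop faster.
    weights = np.array([l ** gamma for l in losses])
    weights = weights / (weights.sum() + 1e-12)

    d = sum(w * g for w, g in zip(weights, unit_grads))

    # A common descent direction should have a positive inner product
    # with every client's gradient; report whether that holds here.
    descending_for_all = all(np.dot(g, d) > 0 for g in unit_grads)
    return d, descending_for_all
```

In this toy version the descent property is only checked, not enforced; the appeal of the approach the abstract describes is that the direction is tuned so that every client's loss decreases, with the largest losses decreasing fastest.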
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have addressed the concerns raised by the three reviewers; the **major** changes are as follows:
1. Introduced subsection 6.2 to clarify the distinctions between AdaFed and FedAdam.
2. Included experiments with a greater number of local epochs.
3. Expanded Section 7.5.
4. Added a discussion of fairness in federated learning in Appendix C.
5. Added experiments using a real-world dataset in Appendix I.
Assigned Action Editor: ~Naman_Agarwal1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1410