Fairness of Federated Learning with Dynamic Participants

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Federated Learning, Dynamic Fairness, Benefit, Normalized SGD
Abstract: The concept of fairness has attracted wide attention in Federated Learning (FL). While there have been numerous studies on various notions of fairness in FL in recent years, all of them consider only the case where the training process starts and ends at the same time points for all participants. In practice, however, participants can be dynamic: they may join and leave the training process at different time points. When participants who join the training process at different time points receive similar incentive benefits, this can be seen as a signal of unfairness. In this paper, we provide the first study of such fairness in FL with dynamic participants. First, we propose a new mathematical definition of the above fairness, namely $\textit{dynamic fairness}$. Briefly speaking, an algorithm is dynamically fair if local agents who participate in model training longer receive more benefits than those who participate for a shorter time. Second, we develop a simple but novel method, which can be seen as a normalized version of $\textit{Fedavg}$, and theoretically show that it is fairer than $\textit{Fedavg}$. Moreover, our method can be combined with previous methods for fair FL with static participants to additionally guarantee fair treatment of local agents who join the training process at the same time point, by minimizing the discrepancy of the benefits they receive. Finally, we propose an empirical measure of $\textit{dynamic fairness}$ and demonstrate, through extensive experiments on three benchmark datasets, that our method achieves fairer performance under our definition of fairness.
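The abstract describes the method only as a "normalized version of Fedavg" (with "Normalized SGD" among the keywords), without giving details. As a hedged illustration only, not the authors' exact algorithm, the sketch below shows one plausible reading: each client's model delta is L2-normalized before the server averages it, in the spirit of normalized SGD, so that a client's influence accrues through the number of rounds it participates in rather than the magnitude of any single update. All names here (`normalized_fedavg_round`, `server_lr`) are hypothetical.

```python
import numpy as np

def normalized_fedavg_round(global_w, client_updates, server_lr=0.1):
    """One illustrative aggregation round (NOT the paper's exact algorithm).

    Each client's update (its delta from the global model) is L2-normalized
    before averaging, so per-round influence is equalized across clients and
    total influence scales with how many rounds a client participates in.
    """
    normed = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        # Guard against a zero update to avoid division by zero.
        normed.append(delta / norm if norm > 0 else delta)
    avg = np.mean(normed, axis=0)
    return global_w + server_lr * avg

# Toy usage: two clients send updates of very different magnitudes
# in the same direction; after normalization they contribute equally.
w = np.zeros(3)
updates = [np.array([1.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]
w_next = normalized_fedavg_round(w, updates)
```

Under this (assumed) scheme, the second client's 10x-larger update carries no extra weight within the round, which is one way longer participation, rather than larger individual contributions, could translate into greater influence.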
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)