Enhancing Group Fairness in Federated Learning through Personalization

23 Sept 2023 (modified: 11 Feb 2024). Submitted to ICLR 2024.
Supplementary Material: zip
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: fairness, personalization, federated learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Instead of producing a single global model for all participating clients, personalized Federated Learning (FL) algorithms aim to collaboratively train customized models for each client, enhancing their local accuracy. For example, clients may be clustered into groups whose members learn similar models, or each client may fine-tune the global model locally to improve its local accuracy. In this paper, we investigate the impact of personalization techniques in the FL paradigm on the local (group) fairness of the learned models, and show that personalization can also lead to improved fairness. We establish this effect through numerical experiments comparing two types of personalized FL algorithms against the baseline FedAvg algorithm and a baseline fair FL algorithm, and we elaborate on the reasons why personalized FL methods improve fairness. We further provide analytical support for this effect under certain conditions.
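The abstract contrasts a single global model (FedAvg) with personalization via local fine-tuning. A minimal sketch of that contrast, using a toy linear-regression setup: all function names, the data model, and the hyperparameters below are illustrative assumptions, not the paper's actual method or experiments.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=20):
    """A few steps of gradient descent on squared error for one client."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(clients, w0, rounds=10):
    """FedAvg: each round, average the clients' locally updated models."""
    w = w0
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    return w

def personalize(w_global, clients):
    """Personalization via fine-tuning: each client adapts the global
    model on its own local data, yielding one model per client."""
    return [local_sgd(w_global.copy(), X, y) for X, y in clients]
```

When clients' data distributions differ, the fine-tuned per-client models typically fit each client's local data better than the single averaged model, which is the kind of local-accuracy gain the abstract attributes to personalization.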
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8065