Fair Federated Learning via the Proportional Veto Core

Published: 02 May 2024 · Last Modified: 25 Jun 2024 · ICML 2024 Poster · CC BY 4.0
Abstract: Previous work on fairness in federated learning introduced the notion of *core stability*, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal *rank* instead of their cardinal utility, and use this insight to adapt the classical notion of *proportional veto core (PVC)* from social choice theory to the federated learning setting. We prove that models that are *PVC-stable* exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.
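For intuition, here is one way to formalize the rank-based notion the abstract refers to, following Moulin's classical proportional veto principle; this is a sketch under that assumption, not necessarily the paper's exact definitions. For a finite model set $\Theta$ and agent utilities $u_1, \dots, u_n$:

```latex
% Sketch following Moulin's proportional veto principle; the paper's exact
% definitions may differ. Ordinal rank of model \theta for agent i: the
% fraction of models that i considers no better than \theta.
\[
  \mathrm{rank}_i(\theta) \;=\;
  \frac{\lvert \{ \theta' \in \Theta : u_i(\theta') \le u_i(\theta) \} \rvert}{\lvert \Theta \rvert}.
\]
% A coalition S (a \beta = |S|/n fraction of the agents) blocks \theta if the
% set of models that every member of S strictly prefers to \theta is larger
% than the (1 - \beta) fraction of models the complementary coalition could
% veto; \theta is PVC-stable when no coalition blocks it.
\[
  \frac{\lvert \{ \theta' \in \Theta : u_i(\theta') > u_i(\theta) \;\; \forall i \in S \} \rvert}{\lvert \Theta \rvert}
  \;>\; 1 - \frac{\lvert S \rvert}{n}.
\]
```

Because ranks are ordinal, this condition is invariant to monotone rescaling of each $u_i$, which is presumably what lets the guarantee avoid the strong cardinal-utility assumptions the abstract criticizes.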
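For a concrete toy check of that blocking condition over a small finite candidate set, a hypothetical sketch follows. This is *not* the paper's Rank-Core-Fed algorithm, which trains a model in a distributed fashion; the function names and the brute-force coalition enumeration are purely illustrative.

```python
# Hypothetical toy sketch, NOT the paper's Rank-Core-Fed algorithm: it checks
# the Moulin-style blocking condition from the display above on a finite set
# of candidate models, given each agent's utilities for each model.
from itertools import combinations

import numpy as np


def ranks(utilities: np.ndarray) -> np.ndarray:
    """Ordinal rank of each model for each agent: the fraction of models
    the agent considers no better. `utilities` has shape (n_agents, n_models)."""
    # rank[i, t] = |{t' : u[i, t'] <= u[i, t]}| / n_models
    return (utilities[:, None, :] <= utilities[:, :, None]).mean(axis=2)


def is_pvc_stable(utilities: np.ndarray, theta: int) -> bool:
    """Check whether model index `theta` is blocked by any coalition S:
    S (a beta fraction of agents) blocks theta if the fraction of models
    every member strictly prefers to theta exceeds 1 - beta. Brute-force
    over all coalitions, so only suitable for toy instances."""
    n_agents, n_models = utilities.shape
    for size in range(1, n_agents + 1):
        beta = size / n_agents
        for coalition in combinations(range(n_agents), size):
            coal = list(coalition)
            theta_u = utilities[coal, theta][:, None]  # shape (size, 1)
            # Models strictly preferred to theta by *all* coalition members.
            preferred = np.all(utilities[coal] > theta_u, axis=0)
            if preferred.mean() > 1 - beta:
                return False  # coalition vetoes theta
    return True


# Toy usage: 3 agents, 4 candidate models.
u = np.array([[0.9, 0.1, 0.5, 0.4],
              [0.2, 0.8, 0.6, 0.4],
              [0.3, 0.2, 0.7, 0.9]])
print("ordinal ranks:\n", ranks(u))
print("PVC-stable model indices:",
      [t for t in range(u.shape[1]) if is_pvc_stable(u, t)])
```

Note that the full coalition (beta = 1) vetoes any Pareto-dominated model under this condition, so every model the check accepts is in particular Pareto optimal.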
Submission Number: 7700