Keywords: Fairness, Federated Learning, Post-processing
Abstract: Distributive fairness is a critical concern in the application of Federated Learning
(FL) to decision making. Three concepts of distributive fairness have recently been
considered important in FL: global, local group, and client fairness. Global fairness
addresses disparities among legally protected groups across the entire population.
Local group fairness addresses disparities between protected groups within individual
clients. Client fairness focuses on disparities across clients. These concepts
of distributive fairness coexist in FL and achieving one does not guarantee the
others. Most FL studies focus on only a single concept. In real-world applications,
however, different stakeholders often require fairness from different perspectives
simultaneously. Enforcing those fairness concepts inherently incurs an accuracy
cost. This paper investigates, for a given FL setup, the maximum accuracy achievable
under various combinations of distributive fairness constraints, i.e., all three, any two,
or just one, depending on the application. We propose a post-processing algorithm
that returns a model with near-optimal accuracy while satisfying pre-specified
fairness constraints. Experimental results show that our algorithm outperforms
the current state of the art (SOTA) in terms of the fairness–accuracy tradeoff and
computational and communication efficiency. Code is available on GitHub.
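The three fairness notions in the abstract can be made concrete with a toy sketch. The snippet below (not the paper's algorithm; all data and the demographic-parity metric are illustrative assumptions) measures global fairness on the pooled population, local group fairness as the worst within-client gap, and client fairness as the disparity in positive rates across clients:

```python
# Toy illustration of global, local group, and client fairness.
# Uses demographic-parity gaps on made-up data; not the paper's method.
import numpy as np

# Per-client binary predictions (1 = positive decision) and
# protected-group labels (0/1).
clients = [
    {"pred": np.array([1, 0, 1, 1]), "group": np.array([0, 0, 1, 1])},
    {"pred": np.array([0, 0, 1, 0]), "group": np.array([0, 1, 1, 0])},
]

def dp_gap(pred, group):
    """Demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Global fairness: gap between protected groups over the pooled population.
pred_all = np.concatenate([c["pred"] for c in clients])
group_all = np.concatenate([c["group"] for c in clients])
global_gap = dp_gap(pred_all, group_all)

# Local group fairness: worst gap within any single client.
local_gap = max(dp_gap(c["pred"], c["group"]) for c in clients)

# Client fairness: disparity in positive rates (a simple proxy for
# per-client model performance) across clients.
rates = [c["pred"].mean() for c in clients]
client_gap = max(rates) - min(rates)
```

Note that the three quantities are computed over different slices of the same predictions, which is why driving one gap to zero need not shrink the others.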
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14867