Keywords: Algorithmic Fairness, Federated Learning, Bias
TL;DR: We introduce a library to generate tabular datasets and release fixed datasets specifically designed to evaluate fair FL methods, encompassing diverse client-level scenarios with respect to bias in sensitive attributes.
Abstract: Federated Learning (FL) enables collaborative model training across multiple clients without sharing clients' private data. However, the diverse and often conflicting biases present across clients pose significant challenges to model fairness.
Current fairness-enhancing FL solutions often fall short, as they typically mitigate biases for a single, usually binary, sensitive attribute, while ignoring the heterogeneous fairness needs that exist in real-world settings.
Moreover, these solutions often evaluate unfairness reduction only on the server side, hiding persistent unfairness at the individual client level.
To support more robust and reproducible fairness research in FL, we introduce a comprehensive benchmarking framework for fairness-aware FL at both the global and client levels. Our contributions are threefold: (1) we introduce \fairdataset, a library for creating tabular datasets tailored to evaluating fair FL methods under heterogeneous client bias; (2) we release four bias-heterogeneous datasets and corresponding benchmarks for comparing fairness mitigation methods in a controlled environment; (3) we provide ready-to-use functions for evaluating fairness outcomes on these datasets.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17477