FedEvalFair: A Privacy-Preserving and Statistically Grounded Federated Fairness Evaluation Framework
Abstract: Federated learning has rapidly gained attention in industry because of its strong privacy protections. However, ensuring the fairness of federated models after deployment remains a practical challenge: each client typically holds only a small private dataset, which limits its ability to judge model fairness accurately. To address this, we propose FedEvalFair, an evaluation framework that combines private data from multiple clients to assess the fairness of deployed models without compromising data privacy. First, FedEvalFair follows the federated learning paradigm to perform a comprehensive assessment while preserving privacy. Second, building on the statistical principle of inferring population properties from a sample, it estimates a model's real-world fairness performance from limited data. Third, it employs a flexible two-stage evaluation strategy based on statistical hypothesis testing. Monte Carlo simulations verify FedEvalFair's theoretical performance and its sensitivity to fairness variations, demonstrating the advantage of the two-stage strategy. We further validate FedEvalFair on real-world datasets, including UCI Adult and eICU, and show that it remains more stable under real-world distribution shifts than traditional evaluation methods.
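To make the aggregation-plus-testing idea the abstract describes concrete, here is a minimal sketch (not from the paper): each client reports only per-group aggregate counts for a demographic-parity check, the server pools them, and a two-stage test first looks for a significant fairness gap and then, if none is found, runs a TOST-style equivalence test. The choice of demographic parity as the metric, the specific two stages, the margin eps, and the function names (local_counts, aggregate, two_stage_test) are all illustrative assumptions; FedEvalFair's actual statistics and test procedure are defined in the paper.

import numpy as np
from scipy import stats

def local_counts(y_pred, sensitive):
    # Each client shares only sufficient statistics per sensitive group:
    # (number of positive predictions, group size) -- never raw records.
    counts = {}
    for g in np.unique(sensitive):
        mask = sensitive == g
        counts[int(g)] = (int(y_pred[mask].sum()), int(mask.sum()))
    return counts

def aggregate(client_counts):
    # Server-side pooling of the per-group counts across all clients.
    total = {}
    for counts in client_counts:
        for g, (pos, n) in counts.items():
            p0, n0 = total.get(g, (0, 0))
            total[g] = (p0 + pos, n0 + n)
    return total

def two_stage_test(total, eps=0.05, alpha=0.05):
    # Stage 1: two-proportion z-test for a demographic-parity gap.
    # Stage 2 (only if stage 1 finds no gap): TOST equivalence test,
    # asking whether |gap| < eps can be affirmed at level alpha.
    (p1, n1), (p2, n2) = total[0], total[1]
    r1, r2 = p1 / n1, p2 / n2
    gap = r1 - r2
    se = np.sqrt(r1 * (1 - r1) / n1 + r2 * (1 - r2) / n2)
    p_diff = 2 * (1 - stats.norm.cdf(abs(gap) / se))
    if p_diff < alpha:
        return "unfair", gap, p_diff
    p_equiv = max(1 - stats.norm.cdf((gap + eps) / se),
                  stats.norm.cdf((gap - eps) / se))
    return ("fair" if p_equiv < alpha else "inconclusive"), gap, p_equiv

# Toy usage: five clients, each with a binary sensitive attribute.
rng = np.random.default_rng(0)
clients = []
for _ in range(5):
    s = rng.integers(0, 2, 200)
    y = (rng.random(200) < 0.4).astype(int)
    clients.append(local_counts(y, s))
print(two_stage_test(aggregate(clients)))

Note that only the count dictionaries ever leave a client in this sketch, which is what allows the pooled test to gain statistical power without exposing raw data.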
Primary Subject Area: [Systems] Systems and Middleware
Secondary Subject Area: [Systems] Systems and Middleware
Relevance To Conference: Current federated learning frameworks protect data privacy during training, yet existing methods for evaluating model fairness typically assume full access to all data and thus overlook privacy. In practice, data privacy remains a critical obstacle to fairness evaluation, and assessing the fairness of federated models is essential to avoid biases against sensitive groups after deployment.
We propose FedEvalFair, a framework that addresses both challenges. By applying statistical inference and hypothesis testing to statistics aggregated from multiple clients' private data, FedEvalFair enables accurate and efficient fairness evaluation of federated models while preserving privacy. It both protects client data and improves the accuracy of fairness assessment, supporting the safe deployment of federated models, and it offers a practical solution for fairness evaluation in multimedia and multimodal data-processing scenarios where privacy is a prime concern.
Supplementary Material: zip
Submission Number: 4574