Towards Assessing Integrated Differential Privacy and Fairness Mechanisms in Supervised Learning

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · TPS-ISA 2024 · CC BY-SA 4.0
Abstract: Intense ongoing research in, and rapid adoption of, machine learning (ML) have raised a critical need to study privacy and fairness issues in ML models together, because of the potential for harm from their use. Research has shown that ML models can leak or allow inference of privacy-sensitive information. Research has also focused on fairness issues in ML systems used for decision-making tasks that rely on sensitive attributes such as gender and race. Several approaches have recently been proposed to mitigate privacy and fairness issues in ML models. In this paper, we explore the trade-offs between differential privacy (DP) and fairness mitigation mechanisms in ML, particularly in supervised learning. We examine existing fairness mitigation mechanisms and their ability to mitigate bias in privacy-preserving ML approaches. DP and fairness mechanisms can be employed at different stages of the ML pipeline: pre-processing, in-processing (during training), and post-processing. We systematically compare the performance and fairness scores of fairness mechanisms applied to supervised ML models against the scores of their privacy-preserving counterparts. Our findings identify the approaches that best balance fairness and privacy. We show that it is possible to maintain good accuracy when combining DP with in-processing fairness mechanisms. We also discuss how to mitigate the impact of the privacy loss budget, ϵ, when building a privacy- and fairness-preserving model.
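To make the setting concrete, below is a minimal illustrative sketch, not the paper's experimental setup: a logistic-regression classifier trained with DP-SGD-style updates (per-example gradient clipping plus Gaussian noise), then scored on both accuracy and one common fairness metric (demographic parity difference) over a binary sensitive attribute. The synthetic data, hyperparameters, and choice of fairness score are assumptions; an in-processing fairness mechanism of the kind compared in the paper would typically also add a group-fairness term or constraint to the training objective.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical synthetic tabular data: 8 features, binary label y,
# binary sensitive attribute a (not the paper's datasets).
n, d = 2000, 8
X = torch.randn(n, d)
a = (torch.rand(n) < 0.4).float()
y = ((X[:, 0] + 0.6 * a + 0.3 * torch.randn(n)) > 0).float()

model = nn.Linear(d, 1)
loss_fn = nn.BCEWithLogitsLoss()

clip_norm = 1.0          # per-example clipping bound C
noise_multiplier = 1.0   # sigma; with the sampling rate and step count, this determines epsilon
lr, batch_size, epochs = 0.5, 100, 5

for _ in range(epochs):
    perm = torch.randperm(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size].tolist()
        summed = [torch.zeros_like(p) for p in model.parameters()]
        # DP-SGD-style step: clip each example's gradient to norm C, sum, add noise.
        for i in idx:
            model.zero_grad()
            loss = loss_fn(model(X[i:i + 1]).squeeze(-1), y[i:i + 1])
            loss.backward()
            grads = [p.grad.detach().clone() for p in model.parameters()]
            total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
            for s, g in zip(summed, grads):
                s += g * scale
        with torch.no_grad():
            for p, s in zip(model.parameters(), summed):
                noise = torch.randn_like(s) * noise_multiplier * clip_norm
                p -= lr * (s + noise) / len(idx)

# Score the private model on utility and one fairness metric.
with torch.no_grad():
    pred = (torch.sigmoid(model(X).squeeze(-1)) > 0.5).float()
acc = (pred == y).float().mean().item()
dp_gap = (pred[a == 1].mean() - pred[a == 0].mean()).abs().item()
print(f"accuracy = {acc:.3f}, demographic parity difference = {dp_gap:.3f}")
```

In practice the noise multiplier and sampling rate would be fed to a privacy accountant (e.g., the moments accountant) to report the spent ϵ, and the same accuracy/fairness scoring would be run on the non-private baseline for comparison.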