Making individually fair predictions with causal pathways

Published: 01 Jan 2023 (Last Modified: 29 Sept 2023). Data Min. Knowl. Discov., 2023.
Abstract: Machine learning is increasingly used to make algorithmic decisions that strongly affect people's lives. Given this societal impact, such decisions need to be accurate and fair with respect to sensitive features, including race, gender, religion, and sexual orientation. To balance prediction accuracy and fairness, causality-based methods have been proposed that utilize a causal graph with unfair pathways. However, none of these methods can ensure fairness for each individual without making restrictive functional assumptions about the data-generating processes, assumptions that often fail to hold in practice. In this paper, we propose a far more practical causality-based framework for learning an individually fair classifier. To avoid impractical functional assumptions, we introduce a new criterion, the probability of individual unfairness, and derive an upper bound on it that can be estimated from data. We then train a classifier by solving an optimization problem in which this upper bound is forced to be close to zero, and we elucidate why solving such a problem can guarantee fairness for each individual. Moreover, we provide two extensions for challenging real-world scenarios: one where there are unobserved variables called latent confounders, and one where the true causal graph is uncertain. Experimental results show that our method learns an individually fair classifier at only a slight cost in prediction accuracy.
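The abstract does not give the exact form of the paper's upper bound, so the following is only a minimal illustrative sketch of the training scheme it describes: minimize a standard classification loss while penalizing an estimable surrogate for the probability of individual unfairness. Here the surrogate `unfairness_penalty` is a hypothetical stand-in (the mean gap between factual and counterfactual predictions under an assumed linear structural equation); the synthetic data, the penalty weight `lam`, and the counterfactual construction are all assumptions for illustration, not the paper's actual derivation.

```python
# Illustrative sketch (PyTorch) of penalized training in the spirit of the
# paper's framework. The paper derives a data-estimable upper bound on the
# probability of individual unfairness; here we substitute a hypothetical
# surrogate penalty for demonstration purposes only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Synthetic data under an assumed linear causal model -------------------
n = 1000
a = torch.bernoulli(torch.full((n, 1), 0.5))        # sensitive feature A
x = 1.5 * a + torch.randn(n, 1)                     # X on an unfair pathway A -> X
y = ((x + 0.5 * torch.randn(n, 1)) > 0.75).float()  # binary label

# Counterfactual X obtained by flipping A while reusing the same noise
# (possible here only because the structural equation is assumed known).
x_cf = x - 1.5 * a + 1.5 * (1 - a)

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 5.0  # penalty weight; the paper instead forces its bound close to zero


def logits(feat, sens):
    """Classifier logits given features and the sensitive attribute."""
    return model(torch.cat([feat, sens], dim=1))


for epoch in range(200):
    opt.zero_grad()
    p = torch.sigmoid(logits(x, a))
    p_cf = torch.sigmoid(logits(x_cf, 1 - a))
    # Hypothetical surrogate for the paper's estimable upper bound: the mean
    # factual/counterfactual prediction gap across individuals.
    unfairness_penalty = (p - p_cf).abs().mean()
    loss = bce(logits(x, a), y) + lam * unfairness_penalty
    loss.backward()
    opt.step()

print(f"final penalty (should be near zero): {unfairness_penalty.item():.4f}")
```

After training, the penalty term is driven toward zero, so the classifier's predictions become nearly invariant to changes along the unfair pathway for each individual, mirroring (in simplified form) the optimization the abstract describes.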