Fairness and robustness in anti-causal prediction

Published: 21 Jul 2022, Last Modified: 20 Oct 2024
SCIS 2022 Poster
Readers: Everyone
Keywords: fairness, robustness, causality
TL;DR: Exploring the connections between fairness and risk invariance through a causal lens informs fairness-robustness tradeoffs and uncovers efficient methods for enforcing fairness
Abstract: Robustness to distribution shift and fairness have independently emerged as two important desiderata required of modern machine learning models. Here, we discuss the connections between them through a causal lens, focusing on anti-causal prediction tasks, where the input to a classifier (e.g., an image) is assumed to be generated as a function of the target label and the protected attribute. By taking this perspective, we draw explicit connections between a common fairness criterion---separation---and a common notion of robustness---risk invariance. These connections provide new motivation for applying the separation criterion in anti-causal settings, and show that fairness can be motivated entirely on the basis of achieving better performance. In addition, our findings suggest that robustness-motivated approaches can be used to enforce separation, and that they often work better in practice than methods designed to directly enforce separation. Using a medical dataset, we empirically validate our findings on the task of detecting pneumonia from X-rays, in a setting where differences in prevalence across sex groups motivate a fairness mitigation. Our findings highlight the importance of considering causal structure when choosing and enforcing fairness criteria.
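The separation criterion referenced in the abstract requires the prediction to be independent of the protected attribute conditional on the true label; for a binary classifier this reduces to equalized odds, i.e., matched true- and false-positive rates across groups. The sketch below is a minimal illustration of measuring that criterion on synthetic data; it is not the paper's implementation, and the helper names and toy data-generating process are assumptions made for this demo.

```python
# Illustrative sketch (not the paper's code): for binary classification,
# separation (Yhat independent of A given Y) means equalized odds, i.e.,
# equal true- and false-positive rates across protected groups. The paper
# connects this to risk invariance: equal expected loss across
# environments that shift the joint distribution of (Y, A).
import numpy as np

def group_rates(y_true, y_pred, a):
    """Per-group (TPR, FPR) for binary labels and predictions."""
    rates = {}
    for g in np.unique(a):
        m = a == g
        tpr = np.mean(y_pred[m & (y_true == 1)])
        fpr = np.mean(y_pred[m & (y_true == 0)])
        rates[g] = (tpr, fpr)
    return rates

def separation_gap(y_true, y_pred, a):
    """Max gap in TPR/FPR across groups; 0 means separation holds exactly."""
    tprs, fprs = zip(*group_rates(y_true, y_pred, a).values())
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy anti-causal setting: label prevalence differs by group (as in the
# paper's pneumonia-by-sex example), and a biased classifier leaks A into
# its prediction given Y, so separation is violated.
rng = np.random.default_rng(0)
n = 10_000
a = rng.integers(0, 2, size=n)                    # protected attribute
y = rng.binomial(1, np.where(a == 1, 0.3, 0.1))   # group-dependent prevalence
y_hat = rng.binomial(1, np.clip(0.7 * y + 0.2 * a, 0.0, 1.0))

print(f"separation gap: {separation_gap(y, y_hat, a):.3f}")
```

In this toy example the gap is large because the prediction depends on the attribute even after conditioning on the label; a robustness-motivated training penalty of the kind the abstract alludes to would drive this gap toward zero by making risk invariant across group-shifted environments.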
Confirmation: Yes
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/fairness-and-robustness-in-anti-causal/code)