FairPFN: Transformers Can Do Counterfactual Fairness

Published: 12 Jul 2024, Last Modified: 09 Aug 2024 · AutoML 2024 Workshop · CC BY 4.0
Keywords: Fairness, Causal ML, Prior-Fitted Networks, In-Context Learning
Abstract: Machine learning systems are increasingly prevalent across healthcare, law enforcement, and finance, but they often operate on historical data that may carry biases against certain demographic groups. Causal and counterfactual fairness provide an intuitive way to define fairness that aligns closely with legal standards, capturing the intuition that a decision is fair to an individual if it remains unchanged in a hypothetical scenario where the individual belongs to another demographic group. Despite its theoretical benefits, counterfactual fairness comes with several practical limitations, largely related to over-reliance on domain knowledge and on approximate causal discovery techniques for constructing a causal model. In this study, we take a fresh perspective on achieving counterfactual fairness, building upon recent work in in-context learning (ICL) and prior-fitted networks (PFNs) to learn a transformer called FairPFN. This model is trained on synthetic fairness data to eliminate the causal effects of protected attributes directly from observational data. In our experiments, we thoroughly assess the effectiveness of FairPFN in eliminating the causal impact of protected attributes. Our findings pave the way for a new and promising research area: transformers for causal and counterfactual fairness.
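To make the counterfactual-fairness intuition in the abstract concrete, here is a minimal toy sketch (not FairPFN itself; all variable names and the linear structural causal model are illustrative assumptions). A protected attribute A causally shifts a feature X; a predictor is counterfactually fair if its output is unchanged when A is flipped while the exogenous noise is held fixed (abduction, action, prediction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy linear SCM: protected attribute A causally shifts the observed feature X.
a = rng.integers(0, 2, size=n)   # protected attribute (e.g. group membership)
u = rng.normal(size=n)           # exogenous noise, independent of A
x = 2.0 * a + u                  # observed feature: X := 2*A + U

# Counterfactual world: flip A but keep the same exogenous noise U.
x_cf = 2.0 * (1 - a) + u

# An "unfair" predictor uses X directly, so A's causal effect leaks through.
unfair = lambda feat: 0.5 * feat
# A "fair" predictor removes A's causal contribution, recovering the A-independent part of X.
fair = lambda feat, grp: 0.5 * (feat - 2.0 * grp)

# Mean absolute change in prediction between factual and counterfactual worlds.
gap_unfair = np.abs(unfair(x) - unfair(x_cf)).mean()        # stays large
gap_fair = np.abs(fair(x, a) - fair(x_cf, 1 - a)).mean()    # zero: counterfactually fair
```

The "fair" predictor here has access to the true causal model; FairPFN's stated goal is to learn such an adjustment in-context from observational data alone, without a hand-specified causal graph.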
Submission Checklist: No
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Optional Meta-Data For Green-AutoML: All questions below on environmental impact are optional.
Submission Number: 16