Reproducibility Study: Equal Improvability: A New Fairness Notion Considering the Long-Term Impact

Published: 12 Jul 2024, Last Modified: 12 Jul 2024
Accepted by TMLR
Abstract: This reproducibility study aims to evaluate the robustness of Equal Improvability (EI), an effort-based framework for ensuring long-term fairness. To this end, we analyze the three proposed EI-ensuring regularization techniques: Covariance-based, KDE-based, and Loss-based EI. Our findings largely substantiate the original claims, demonstrating EI's improved performance over Empirical Risk Minimization (ERM) on various test datasets. Furthermore, while affirming EI's long-term effectiveness in promoting fairness, the study also uncovers challenges in resilience to overfitting, particularly in highly complex models. Building upon the original study, we extended the experiments to a new dataset and to multiple sensitive attributes. These additional tests further demonstrated the effectiveness of the EI approach, reinforcing the original findings. Our study highlights the importance of adaptable strategies in AI fairness, contributing to the ongoing discourse in this field of research.
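To make the abstract's reference to EI regularizers concrete, the following is a minimal, hypothetical PyTorch sketch of a loss-based-style EI penalty. The function name, the effort model (a single normalized gradient-ascent step under an L2 budget `delta`), and all parameters are illustrative assumptions, not the authors' implementation; the actual regularizers (Covariance-based, KDE-based, and Loss-based) are in the linked repository.

```python
import torch

def ei_disparity_penalty(model, x, z, delta=0.5, threshold=0.5):
    """Hypothetical sketch of a loss-based-style EI penalty.

    Among samples the classifier currently rejects, approximate each
    sample's best achievable score within an L2 effort budget `delta`
    via one normalized gradient-ascent step on the features, then
    penalize the gap in mean improved scores between the two sensitive
    groups (z == 0 vs. z == 1).
    """
    scores = torch.sigmoid(model(x)).squeeze(-1)
    rejected = scores <= threshold
    if rejected.sum() == 0:
        return torch.zeros((), device=x.device)

    x_rej = x[rejected].detach().clone().requires_grad_(True)
    z_rej = z[rejected]
    out = torch.sigmoid(model(x_rej)).squeeze(-1)

    # One normalized gradient step approximates the best bounded effort;
    # create_graph=True lets the penalty backpropagate into the model.
    grad = torch.autograd.grad(out.sum(), x_rej, create_graph=True)[0]
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    improved = torch.sigmoid(model(x_rej + delta * direction)).squeeze(-1)

    g0, g1 = improved[z_rej == 0], improved[z_rej == 1]
    if g0.numel() == 0 or g1.numel() == 0:
        return torch.zeros((), device=x.device)
    return (g0.mean() - g1.mean()).abs()
```

In training, such a term would typically be added to the ERM objective with a weight, e.g. `loss = bce_loss + lam * ei_disparity_penalty(model, x, z)`, trading predictive accuracy against equal improvability across groups.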
Certifications: Reproducibility Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added an acknowledgements section (last section) and deanonymized the paper
Code: https://github.com/JakubTomaszewski/ei_fairness_reproducibility
Assigned Action Editor: ~Kangwook_Lee1
Submission Number: 2241