[Re] Reproducibility Study of Equal Improvability Fairness Notion

TMLR Paper2260 Authors

17 Feb 2024 (modified: 01 Mar 2024) · Under review for TMLR
Abstract: Our study validates and extends the Equal Improvability (EI) framework, which promotes long-term fairness by equalizing, across groups, the improvement effort that rejected individuals need in order to be accepted. By replicating the original findings, we reaffirm EI's foundational claims. We then conduct extended experiments that probe EI's efficacy under varied scenarios. To assess long-term fairness beyond the Gaussian-distributed dataset of the original study, we introduce non-parametric distribution updates and a Chi-square-distributed dataset. Our analysis shows that the EI framework struggles to adapt to the Chi-square dataset and performs even worse under non-parametric updates in long-term scenarios, indicating difficulty with dynamically shifting distributions. We also modify the update rule to align more closely with the underlying theorem and with intuition. Furthermore, we show that EI is more robust to noise than the other fairness notions considered, and an examination of varying decision fractions reveals that EI's robustness is conditional on the acceptance rate. Together, these experiments highlight EI's strengths in certain contexts and its limitations in others, providing a nuanced understanding of its applicability and of areas for improvement in the pursuit of fairness in machine learning.
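The EI notion described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a linear scorer, a simple L2 effort budget `delta`, and hypothetical function and variable names, and it measures the EI disparity as the largest gap between groups in the fraction of rejected samples that could reach acceptance within the budget.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ei_disparity(X, groups, w, b, delta):
    """EI disparity sketch for a linear scorer f(x) = sigmoid(w.x + b).

    A sample is rejected when f(x) < 0.5. For a linear scorer, the best
    effort of L2 norm at most `delta` moves the point along w / ||w||,
    raising the logit by delta * ||w||. EI compares, across groups, the
    rate at which rejected samples can reach f >= 0.5 within the budget.
    """
    scores = sigmoid(X @ w + b)
    rejected = scores < 0.5
    # Best achievable score within the effort budget (linear scorer).
    best = sigmoid(X @ w + b + delta * np.linalg.norm(w))
    rates = []
    for g in np.unique(groups):
        mask = rejected & (groups == g)
        rates.append((best[mask] >= 0.5).mean())
    return max(rates) - min(rates)

# Two synthetic groups with shifted feature distributions (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.normal(-0.5, 1.0, size=(500, 2))])
groups = np.repeat([0, 1], 500)
w, b = np.array([1.0, 1.0]), 0.0
print(ei_disparity(X, groups, w, b, delta=0.5))
```

A classifier satisfying EI would drive this disparity toward zero; note that a very large budget trivially makes every rejected sample improvable, so the disparity vanishes regardless of the scorer.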
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Andrew_Miller1
Submission Number: 2260