Reproducibility Study of 'Exacerbating Algorithmic Bias through Fairness Attacks'

Published: 11 Apr 2022, Last Modified: 05 May 2023 · RC2021
Abstract: Reproducibility Summary

Scope of Reproducibility
The goal of this paper is to assess the reproducibility of the experiments and results in the paper 'Exacerbating Algorithmic Bias through Fairness Attacks' by Mehrabi et al. (2021), from which the following claims are evaluated:
- Claim 1: The anchoring attacks reduce the fairness of an ML model trained on the three data sets German Credit, COMPAS, and Drug Consumption.
- Claim 2: The influence attack reduces the fairness of an ML model trained on the three data sets German Credit, COMPAS, and Drug Consumption.

Methodology
We used the code the authors published alongside their paper as a resource to understand the methodology of their experiments, which was only briefly touched upon in the original paper. Our contribution is to reconstruct the original method from the provided code and to use it to recreate the experiments, obtaining results similar to those in the paper and supporting the authors' claims.

Results
Our results followed patterns similar to those of the authors, which supports their claims regarding the attacks. However, our results deviated slightly from theirs, meaning the original paper has some reproducibility issues in the context of our experimental setup.

What was easy and what was difficult
It was difficult to understand the experiments from the paper alone: in our specific setting it was not possible to obtain similar results by following only the methodology described in the paper, and recreating the data sets required several assumptions. Reorganizing the code was a challenge in and of itself, owing to a lack of documentation in the original code.

Communication with original authors
We had no direct contact with the authors. However, other research teams working on reproducing the same work provided us with a digital environment file supplied to them by the authors.
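Both claims concern a drop in model fairness; the original paper quantifies this with group-fairness metrics such as statistical parity difference. As a point of reference only, below is a minimal sketch of that metric, assuming binary predictions and a binary sensitive attribute; the function name and signature are ours for illustration, not taken from the authors' code.

    import numpy as np

    def statistical_parity_difference(y_pred, sensitive):
        # Absolute gap in positive-prediction rates between the two groups
        # defined by a binary sensitive attribute (illustrative sketch).
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_group1 = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | a = 1)
        rate_group0 = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | a = 0)
        return abs(rate_group1 - rate_group0)

    # A successful fairness attack pushes this value further from 0, e.g.:
    spd = statistical_parity_difference([1, 1, 0, 1], [1, 1, 0, 0])  # 0.5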
Paper Url: https://arxiv.org/pdf/2012.08723.pdf
Paper Venue: AAAI 2021