Reproducibility study of "Joint Multisided Exposure Fairness for Recommendation"

Published: 02 Aug 2023, Last Modified: 02 Aug 2023
MLRC 2022
Keywords: Reproducibility, Information Retrieval, Fairness
TL;DR: Reproducibility study of Joint Multisided Exposure Fairness (JME) for Recommendation.
Abstract:

Scope of Reproducibility: In this work, we study the reproducibility of "Joint Multisided Exposure Fairness (JME) for Recommendation", a recent paper on fairness in ranking algorithms by Wu et al. We aim to verify the following claims made by the paper: (i) each of the six proposed exposure fairness metrics quantifies a different notion of unfairness, (ii) for each of the proposed metrics there exists a disparity-relevance trade-off, and (iii) recommender systems can be optimized toward different fairness goals by considering different combinations of the JME-fairness metrics.

Methodology: We modify and extend the open-source implementation of the pipeline published by the authors on GitHub. Our adjustments include restructuring the code base, adding experimental setup files, and fixing several bugs. We run the experiments on an RTX 3070 GPU, at a reproducibility cost of 44.5 GPU hours.

Results: We successfully reproduce the major trends of the core results, although some numerical deviations occur. In particular, we are able to provide support for two out of the three claims. However, due to insufficient documentation and resources, we were unable to verify the third claim of the paper. In agreement with the original authors, we conclude that determining the fairness of a recommender system requires considering multiple fairness dimensions from a multi-stakeholder perspective.

What was easy: The JME-fairness metrics proposed in the paper are well explained and fairly intuitive. Even without a background in fairness in AI and recommender systems, we were able to follow the pipeline and the main ideas presented.

What was difficult: Details regarding the setup of the experiments are missing from the original codebase, and documentation is limited. In addition, reproducing the third claim requires familiarity with topics not analyzed in the paper.

Communication with original authors: We contacted the authors by email, and they provided some clarifications regarding the experimental setups and the calculations performed in the experiments of the original paper. Their response answered part of our questions and included a reference to a GitHub repository that is potentially suitable for demonstrating optimization with a JME-fairness loss.
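To make the verified claims concrete, the sketch below illustrates how exposure-based fairness metrics of this kind can be computed. This is our own illustration, not the authors' implementation: it assumes the expected-exposure framework the paper builds on, in which item exposure follows a position-based (RBP-style) browsing model with patience parameter gamma, and each JME-fairness metric measures the squared deviation of the system's exposure from a target exposure, aggregated at different levels (individual users and items versus user and item groups). All function and variable names are illustrative.

```python
# Minimal sketch (assumptions noted above) of two JME-style fairness
# metrics: individual-to-individual (II-F) and group-to-group (GG-F).
import numpy as np

def exposure_from_rankings(rankings: np.ndarray, n_items: int,
                           gamma: float = 0.8) -> np.ndarray:
    """Average exposure each item receives from each user's sampled rankings.

    rankings: (n_users, n_samples, k) array of item indices, one row per
    sampled ranking drawn from a stochastic ranking policy.
    Returns an (n_users, n_items) system exposure matrix E.
    """
    n_users, n_samples, k = rankings.shape
    position_bias = gamma ** np.arange(k)          # RBP-style position weights
    E = np.zeros((n_users, n_items))
    for u in range(n_users):
        for s in range(n_samples):
            E[u, rankings[u, s]] += position_bias  # exposure from this ranking
    return E / n_samples                           # expectation over samples

def ii_f(E: np.ndarray, E_target: np.ndarray) -> float:
    """Individual-to-individual unfairness: squared deviation of each
    user-item exposure from its target value."""
    return float(np.mean((E - E_target) ** 2))

def gg_f(E: np.ndarray, E_target: np.ndarray,
         user_groups: np.ndarray, item_groups: np.ndarray) -> float:
    """Group-to-group unfairness: aggregate exposure over user groups and
    item groups before measuring the deviation from the target."""
    n_ug, n_ig = user_groups.max() + 1, item_groups.max() + 1
    G = np.zeros((n_ug, n_ig))
    G_target = np.zeros((n_ug, n_ig))
    for ug in range(n_ug):
        for ig in range(n_ig):
            block = np.ix_(user_groups == ug, item_groups == ig)
            G[ug, ig] = E[block].mean()
            G_target[ug, ig] = E_target[block].mean()
    return float(np.mean((G - G_target) ** 2))
```

Under this framing, the disparity-relevance trade-off of claim (ii) arises because making the ranking policy more stochastic moves the system exposure closer to the target exposure (lower disparity) while reducing the expected relevance of the items shown.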
Paper Url: https://arxiv.org/abs/2205.00048
Paper Venue: ACM SIGIR 2022
Journal: ReScience Volume 9 Issue 2 Article 20
Doi: https://www.doi.org/10.5281/zenodo.8173698
Code: https://archive.softwareheritage.org/swh:1:dir:ddaee99fffaa5becad67496efe000ca6c47341c7