Keywords: ML Reproducibility Challenge, Transitive Global Translations, Global Counterfactual Explanation
Abstract: [*] Scope of Reproducibility
In this paper we present an analysis and elaboration of (Plumb et al., 2020), in which an algorithm is proposed for finding human-understandable explanations, expressed in terms of given explainable features of the input data, for the differences between groups of points that appear in a lower-dimensional representation of that data.
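To make the setting concrete, the sketch below (our own illustrative notation and function names, not the authors' implementation) shows one way such an explanation can be phrased: a translation vector delta over the explainable input features that, when added to the points of one group, moves their low-dimensional representation onto that of another group. Because delta is expressed per input feature, it can be read directly as "increase or decrease feature i by this amount".

    import numpy as np

    def counterfactual_loss(r, X_a, X_b, delta):
        """Squared distance between translated group A and the centre of group B in
        the low-dimensional representation; `r` maps input points to that
        representation (e.g. a trained VAE encoder)."""
        target = np.asarray(r(X_b)).mean(axis=0)   # where group B lies in the representation
        moved = np.asarray(r(X_a + delta))         # group A after applying the candidate explanation
        return float(np.mean(np.sum((moved - target) ** 2, axis=1)))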
[*] Methodology
We have upgraded the original code provided by the authors so that it is compatible with recent versions of popular deep learning frameworks, namely the TensorFlow 2.x and PyTorch 1.7.x libraries. Furthermore, we have created our own implementation of the algorithm, into which we incorporated additional experiments to evaluate the algorithm's relevance for different dimensionality reduction techniques and differently structured data. We have performed the same experiments as described in the original paper using both the upgraded version of the authors' code and our own implementation, taking the authors' code and paper as references.
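As an illustration of the kind of change such an upgrade typically involves (not necessarily the exact modifications we made), TensorFlow 1.x-style graph code can often be run under TensorFlow 2.x through the compatibility module:

    # A common pattern for running TensorFlow 1.x-style graph code under TensorFlow 2.x.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()  # restores the graph-mode semantics that TF 1.x code expects
    # After this, the original Session/placeholder-based code can often run with few further changes.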
[*] Results
The results presented in the original paper were reproducible, both with the provided code and with our own implementation. Our additional experiments highlighted several limitations of the explanatory algorithm: it relies heavily on the shape and variance of the clusters present in the data (and, where applicable, on the method used to label these clusters), and explanation quality degrades for highly non-linear dimensionality reduction techniques.
[*] What was easy
The authors have provided an implementation that cleanly separates the different experiments on the different datasets from the core methodology. Thanks to this separation, given a working environment, the experiments in the original paper can be reproduced with little effort.
[*] What was difficult
Minor difficulties were experienced in setting up the environment required to run the code provided by Plumb et al. locally (i.e. trivial changes to the code such as replacing absolute paths, and obtaining external dependencies). However, rewriting all of the corresponding code, including the variational auto-encoder architecture provided by the external scvis 0.1.0 package, was time-consuming.
[*] Communication with original authors
No communication with the original authors was required to reproduce their work.
Paper URL: https://openreview.net/forum?id=MFj70_2-eY1