Keywords: reproducibility, XAI, causality, post-hoc, explanations, VAE, CNN, black-box
Abstract

Scope of Reproducibility
The paper by O’Shaughnessy et al. (2020) claims to have developed a method that disentangles the latent space of
generative models during training, so that the latent space consists of variables with causal influence and variables with
non-causal influence on a classifier's output. These variables can then serve as explanations. We reproduce these
models with the goal of examining their latent spaces and assessing whether the factors serve as sufficiently reliable explanations.
Methodology
The paper's GitHub repository contains a detailed README explaining how to reproduce the different figures. We
followed these steps in order to reproduce the results. Additionally, we extended the work by applying the method to a
more complex dataset, namely ImageNet.
Results
Generally speaking, the results in the paper are reproducible. However, the accuracy we obtain when running Experiment 3
(38%) is much lower than in the paper, because we reduced the number of Monte-Carlo samples by a factor of five. The
difference between the α and β latent factors remains the same, even though the accuracy is much lower for α1 and α2.
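This sensitivity to the Monte-Carlo budget is unsurprising: the standard error of a Monte-Carlo estimate grows as the sample count shrinks. A minimal numpy sketch (the integrand and sample sizes are illustrative and unrelated to the paper's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_std(f, n, trials=2000):
    # Spread of the Monte-Carlo mean of f(z), z ~ N(0, 1), taken over
    # many repeated estimates with n samples each.
    return np.std([f(rng.normal(size=n)).mean() for _ in range(trials)])

sd_full = mc_std(np.square, 500)    # illustrative integrand: E[z^2] = 1
sd_fifth = mc_std(np.square, 100)   # a fifth of the sample budget

# Cutting the samples 5x inflates the standard error by roughly sqrt(5).
print(sd_fifth / sd_full)
```

So a five-fold reduction in samples makes each estimate noticeably noisier, which is consistent with a drop in downstream accuracy even when the qualitative α/β distinction survives.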
The results of our extension experiments did not show the same properties as those in the paper. This, however, may be
caused by factors other than the generalisability of the method.
What was easy
The paper is clearly written and explains its concepts well. The provided code base and README file also make it easy to
reproduce the results of the paper.
What was difficult
The paper's GitHub repository is still being updated, so one version may produce results similar to the paper's while
another results in errors. The code itself is also not well documented, which makes it difficult to debug
problems at such a low level.
Communication with original authors
We asked the authors for advice on our extension, and they provided useful guidance on how to approach it
and why it has value beyond the original paper.
https://github.com/UvAartificialintelligence/Fairness-Accountability-Confidentiality-and-Transparency-in-AI
Paper Url: https://openreview.net/forum?id=tdG6Fa3Y6hq