Keywords: variational autoencoder, latent space, disentanglement, transparency, generative, VAE, Grad-CAM, visualization, explainability
Abstract: Using a modification of Grad-CAM, attention maps can be created for Variational Autoencoders, making their generations explainable. Using these attention maps, state-of-the-art anomaly detection and latent space disentanglement are achieved.
We started the challenge using the authors' code, but this covered only one experiment of the paper, namely anomaly detection on the MNIST dataset. We therefore added models, training, and testing code for all other anomaly detection experiments (those on the UCSD-Ped1 and MVTec datasets) as well as for the latent space disentanglement experiments. Some of these implementations were based on other existing repositories, while others we implemented entirely ourselves. We worked full-time for four weeks on reproducing the results, with two GPUs available for use.
We were able to generate attention maps using the method described by Liu et al. and could apply them to anomaly detection as well. For the MNIST experiments, this led to results similar to those in the paper. However, for the UCSD-Ped1 experiments, the authors' explainable VAE model actually performed worse than our own baseline. Moreover, we were not able to support the authors' claim of state-of-the-art performance on the MVTec dataset. Finally, for latent space disentanglement, our results were not as good as those reported by Liu et al., but they still outperformed the baseline, as the authors also claimed.
The authors' initial implementation was relatively straightforward to run. We were therefore able to generate attention maps and anomaly detections for the MNIST dataset using a Variational Autoencoder without much difficulty.
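For reference, the core attention-map computation applies Grad-CAM to the VAE encoder: for each latent dimension, the gradient of the latent mean is taken with respect to the last convolutional feature maps, global-average-pooled into per-channel weights, and used in a ReLU'd weighted sum over the activations. Below is a minimal PyTorch sketch of our understanding of this step; the function name and the sum-over-dimensions aggregation are our own choices, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def vae_attention_map(encoder_features, z_mu):
    """Grad-CAM-style attention map for a VAE (sketch, not the authors' code).

    encoder_features: activations of the encoder's last conv layer,
                      shape (1, C, H, W), still attached to the graph.
    z_mu:             latent mean vector mu(x), shape (1, D).
    Returns an (H, W) map, aggregated over latent dimensions.
    """
    maps = []
    for d in range(z_mu.shape[1]):
        # Gradient of the d-th latent mean w.r.t. the conv activations.
        grads = torch.autograd.grad(z_mu[0, d], encoder_features,
                                    retain_graph=True)[0]             # (1, C, H, W)
        # Grad-CAM channel weights: global average pool of the gradients.
        weights = grads.mean(dim=(2, 3), keepdim=True)                # (1, C, 1, 1)
        # Weighted sum of feature maps followed by ReLU.
        maps.append(F.relu((weights * encoder_features).sum(dim=1)))  # (1, H, W)
    # Aggregate the per-dimension maps; summing is one simple choice.
    return torch.stack(maps).sum(dim=0).squeeze(0)                    # (H, W)
```

For anomaly detection, such a map can then be upsampled to the input resolution and used to score or localize anomalous regions.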
However, the authors' code covered only a small portion of the paper, and extending it to cover the whole paper was very difficult, as the paper was often unclear about implementation details. Adding certain evaluation metrics also turned out to be relatively hard.
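As an example of the kind of evaluation we had to add ourselves, a pixel-level ROC-AUC for anomaly localization can be computed by flattening the attention maps and the ground-truth masks. The sketch below assumes scikit-learn is available; `pixel_auroc` is a hypothetical helper name, not something from the authors' repository.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def pixel_auroc(attention_maps, gt_masks):
    """Pixel-level ROC-AUC for anomaly localization (illustrative sketch).

    attention_maps: iterable of (H, W) float maps (higher = more anomalous).
    gt_masks:       iterable of (H, W) binary ground-truth anomaly masks.
    """
    # Flatten all maps and masks into one long score/label vector each.
    scores = np.concatenate([m.ravel() for m in attention_maps])
    labels = np.concatenate([g.ravel().astype(int) for g in gt_masks])
    return roc_auc_score(labels, scores)
```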
We contacted the authors by email, using the addresses provided in the paper and on GitHub, but received no response. Another group in our course working on the same paper did get a response, which gave us some additional insights.
Paper Url: https://openreview.net/forum?id=CrORjXGxoNk&noteId=7nlxondNvck&referrer=%5BML%20Reproducibility%20Challenge%202020%5D(%2Fgroup%3Fid%3DML_Reproducibility_Challenge%2F2020)