Reproducing Visual Explanations of Variational Autoencoders

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission · Readers: Everyone
Abstract:

Scope of Reproducibility: The paper's primary claim is that its proposed method can generate gradient-based attention maps from the latent space of a variational autoencoder (VAE), which is demonstrated visually on the MNIST dataset. These attention maps can further be used for anomaly detection, which is demonstrated on the UCSD Ped1 and MVTec-AD datasets. Finally, the paper proposes an attention disentanglement loss, which can improve latent space disentanglement when integrated into a FactorVAE model, as shown on the dSprites dataset.

Methodology: To produce attention maps that localize anomalies on the MNIST dataset, the authors' repository could be used. Additional models for the UCSD Ped1 and MVTec-AD datasets were implemented based on the supplementary material provided for the original paper. To reproduce the latent space disentanglement results, the AD-FactorVAE model was implemented based on the paper, its supplement, and external source code.

Results: The attention maps for the MNIST dataset were successfully replicated. However, we failed to reproduce the results for the UCSD Ped1 and MVTec-AD datasets, and we were also unable to reproduce the results for the AD-FactorVAE. Possible explanations are an incorrect aggregation or weighting of the attention disentanglement loss, training the models for too few epochs, or not using the correct method to produce the attention maps.

What was easy: Reproducing the attention maps and anomaly localization results for the MNIST dataset was relatively easy.

What was difficult: Reproducing the anomaly localization results for the UCSD Ped1 and MVTec-AD datasets was difficult. Additionally, reproducing the disentanglement results using the AD-FactorVAE proved to be problematic.
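To make the central claim concrete, the following is a minimal, hedged sketch of a Grad-CAM-style attention map computed from a VAE latent code. The toy encoder head (global average pooling followed by a linear map), all shapes, and all variable names are illustrative assumptions, not the authors' implementation; with this linear head the gradients of each latent unit with respect to the feature maps are analytic, so no autograd library is needed.

```python
import numpy as np

# Assumed toy setup (not the paper's architecture): "feats" stands in for the
# last convolutional feature maps of a VAE encoder, and the latent mean is
# z_k = w_k . GAP(feats), a global-average-pool followed by a linear layer.
rng = np.random.default_rng(0)
C, H, W, K = 4, 8, 8, 2                   # channels, spatial size, latent dims
feats = rng.standard_normal((C, H, W))    # feature maps F
w = rng.standard_normal((K, C))           # linear head weights

# Forward pass of the toy head.
gap = feats.mean(axis=(1, 2))             # shape (C,)
z = w @ gap                               # latent mean, shape (K,)

# For this head, dz_k/dF[c,h,w] = w[k,c] / (H*W), so the Grad-CAM channel
# weights (spatially averaged gradients) are simply:
alpha = w / (H * W)                       # shape (K, C)

# One attention map per latent dimension: ReLU of the gradient-weighted
# feature maps, then aggregated over latent dimensions by summation.
maps = np.einsum('kc,chw->khw', alpha, feats)   # shape (K, H, W)
attention = np.maximum(maps, 0.0).sum(axis=0)   # shape (H, W)

print(attention.shape)         # (8, 8)
print((attention >= 0).all())  # True
```

In the anomaly-localization setting, such a map would be thresholded or compared against a reference to highlight regions the latent code attends to; in practice the gradients would come from automatic differentiation rather than a closed form.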
Communication with original authors: The authors replied to our initial e-mail, in which we informed them of our attempt to reproduce their paper and asked whether they would be willing to share more of their code; they declined, and did not react to any of our subsequent questions.
Paper Url: https://openreview.net/forum?id=CrORjXGxoNk