Reproducing towards visually explaining variational autoencoders

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission
Keywords: Variational autoencoders, anomaly detection, latent space disentanglement, attention maps
Abstract: The paper by Liu et al. (2020) claims to develop a new technique capable of visually explaining Variational Autoencoders (VAEs). The authors further claim that these explanation maps enable simple models to achieve state-of-the-art performance on anomaly detection and localization tasks, and that using the attention maps as trainable constraints improves latent space disentanglement. The validity of these claims is tested by reproducing the reported experiments and comparing the outcomes with those of Liu et al. (2020). Where available, the original code provided by the authors is used for the reproduction; if the parameterization is not reported in the paper, the default parameters are applied. For the majority of the experiments, however, no code is provided by the authors. Overall, the qualitative results obtained in this reproduction study are comparable to those in the original paper, showing that the attention maps highlight the anomalies in the images. However, the quantitative results do not match the original paper: they score lower on both the AUROC and IoU metrics for anomaly detection. Additionally, the reconstruction of the AD-FactorVAE was not successful, so no results were obtained for this part.
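As context for the reproduced method, the sketch below illustrates how Grad-CAM-style attention maps can be computed from a VAE's latent code, in the spirit of the technique described by Liu et al. (2020): gradients of the latent means with respect to the last encoder feature maps are pooled into channel weights and combined into a spatial map. This is a minimal illustrative sketch, not the authors' implementation; the encoder architecture, layer sizes, and function names (`TinyVAEEncoder`, `latent_attention_map`) are assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the authors' code) of
# gradient-based attention maps computed from a VAE latent code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAEEncoder(nn.Module):
    """Illustrative convolutional encoder producing mu and logvar."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        feats = self.features(x)            # keep the conv feature maps
        flat = feats.flatten(1)
        return self.fc_mu(flat), self.fc_logvar(flat), feats

def latent_attention_map(encoder, x):
    """Grad-CAM-style attention: gradients of the summed latent means
    w.r.t. the last conv feature maps, pooled into channel weights."""
    mu, _, feats = encoder(x)
    feats.retain_grad()                     # feats is a non-leaf tensor
    mu.sum().backward()                     # aggregate over latent dimensions
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # GAP of gradients
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Normalize each map to [0, 1] for visualization or thresholding.
    cam = cam - cam.amin(dim=(2, 3), keepdim=True)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return cam.detach()

# Example: attention map for a single 64x64 RGB image.
encoder = TinyVAEEncoder()
img = torch.rand(1, 3, 64, 64)
attn = latent_attention_map(encoder, img)   # shape (1, 1, 64, 64)
```

In an anomaly localization setting, such a normalized map could be thresholded and compared against a ground-truth anomaly mask (e.g. for an IoU score), which is how the localization claims of the original paper are typically evaluated.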
Paper Url: https://openreview.net/group?id=ML_Reproducibility_Challenge/2020#submissions