When Do Variational Autoencoders Know What They Don't Know?

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: variational autoencoder, generative model
Abstract: Recently, the ability of deep generative models to detect outliers has been called into question by the demonstration that they frequently assign higher probability density to samples from completely different data sets than to samples from the data used for training. For example, a model trained on CIFAR-10 may counter-intuitively attribute higher likelihood to samples drawn from SVHN. In this work, we closely examine this phenomenon in the specific context of variational autoencoders, a commonly used approach for anomaly detection. In particular, we demonstrate that VAEs, when appropriately designed and trained, are in fact often proficient in differentiating inlier and outlier distributions, e.g., FashionMNIST vs MNIST, CIFAR-10 vs SVHN and CelebA. We describe various mechanisms that compromise this capability, including the paradoxical necessity of large or unbounded gradients, which have sometimes been observed to occur during training of VAE models.
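A minimal sketch of the outlier-scoring setup the abstract refers to: a VAE's per-sample ELBO is used as a likelihood proxy, and samples with low ELBO are flagged as outliers. The architecture, layer sizes, Bernoulli decoder, and random stand-in data below are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    """Toy fully-connected VAE with a Gaussian encoder and Bernoulli decoder."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def elbo(model, x):
    """Per-sample ELBO in nats; higher means the model deems the sample more likely."""
    logits, mu, logvar = model(x)
    # Bernoulli reconstruction log-likelihood, summed over pixels.
    rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=1)
    # Analytic KL( q(z|x) || N(0, I) ).
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return rec - kl

# Usage sketch: flag a sample as an outlier if its ELBO falls below a threshold
# chosen on held-out inlier data. Batches here are random placeholders standing in
# for, e.g., a FashionMNIST (inlier) and an MNIST (outlier) batch.
model = SmallVAE()
x_inlier = torch.rand(8, 784)
x_outlier = torch.rand(8, 784)
print(elbo(model, x_inlier).mean().item(), elbo(model, x_outlier).mean().item())
```

The failure mode discussed in the abstract corresponds to the outlier batch receiving a *higher* score than the inlier batch under such a likelihood proxy.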