Negative Sampling in Variational Autoencoders

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: Pulling near-manifold examples (drawn from an auxiliary dataset or from generated samples) toward a secondary prior improves the discriminative power of VAE models on out-of-distribution samples.
Abstract: We propose negative sampling as an approach to improve the notoriously bad out-of-distribution likelihood estimates of Variational Autoencoder models. Our model pushes the latent images of negative samples away from the prior. When the source of negative samples is an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, we present a fully unsupervised variant that also significantly improves detection performance: using the generator's own outputs as the source of negative samples yields a model that can be interpreted as adversarially trained.
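
A minimal sketch of the idea described above, assuming a standard Gaussian-latent VAE implemented in PyTorch: alongside the usual ELBO terms on in-distribution data, the approximate posterior of each negative sample is pulled toward a secondary prior placed away from the standard N(0, I) prior. The names encoder, decoder, MU_NEG, and LAMBDA_NEG are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

MU_NEG = 3.0      # assumed mean of the secondary prior for negative samples
LAMBDA_NEG = 1.0  # assumed weight of the negative-sample term

def kl_diag_gaussian(mu, logvar, prior_mu=0.0):
    # KL( N(mu, diag(exp(logvar))) || N(prior_mu * 1, I) ), summed over latent dims
    return 0.5 * torch.sum(logvar.exp() + (mu - prior_mu) ** 2 - 1.0 - logvar, dim=1)

def loss_with_negative_sampling(encoder, decoder, x_pos, x_neg):
    # Standard ELBO for in-distribution data: reconstruction term plus
    # KL that pulls q(z | x_pos) toward the primary prior N(0, I).
    mu_p, logvar_p = encoder(x_pos)
    z_p = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
    recon = F.binary_cross_entropy(           # assumes inputs/outputs in [0, 1]
        decoder(z_p), x_pos, reduction="none"
    ).flatten(1).sum(1)
    elbo_loss = recon + kl_diag_gaussian(mu_p, logvar_p)

    # Negative samples (auxiliary data or the generator's own outputs):
    # pull their posteriors toward a secondary prior N(MU_NEG * 1, I),
    # i.e. push their latent images away from the primary prior.
    mu_n, logvar_n = encoder(x_neg)
    neg_loss = kl_diag_gaussian(mu_n, logvar_n, prior_mu=MU_NEG)

    return (elbo_loss + LAMBDA_NEG * neg_loss).mean()

In this sketch, OOD scoring would still use the primary prior N(0, I), so negative samples whose latents sit near the secondary prior receive low likelihood under the model.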
Keywords: Variational Autoencoder, generative modelling, out-of-distribution detection