Seeing the whole picture instead of a single point: Self-supervised likelihood learning for deep generative models

16 Oct 2019 (modified: 06 Dec 2019) · AABI 2019 Symposium Blind Submission · Readers: Everyone
  • Keywords: Variational autoencoders, Semantic Likelihood, Self-supervised Learning
  • Abstract: Recent findings show that deep generative models can assign higher likelihood to out-of-distribution samples than to samples drawn from the same distribution as the training data. In this work, we focus on variational autoencoders (VAEs) and address the problem of misaligned likelihood estimates on image data. We develop a novel likelihood function that is based not only on the parameters returned by the VAE but also on features of the data learned in a self-supervised fashion. In this way, the model additionally captures the semantic information that the usual VAE likelihood function disregards. We demonstrate the improved reliability of the estimates with experiments on the FashionMNIST and MNIST datasets.
  • TL;DR: Improved likelihood estimates in variational autoencoders using self-supervised feature learning
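The abstract describes augmenting the standard VAE likelihood with a term evaluated on self-supervised features of the input. The paper itself does not specify the exact combination, so the following is only a minimal sketch of that idea, assuming diagonal-Gaussian densities in both pixel space and feature space; the function names (`semantic_log_likelihood`, `feature_fn`) and the additive combination are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_log_likelihood(x, mu, sigma):
    # Log-density of x under a diagonal Gaussian N(mu, diag(sigma^2)),
    # summed over all dimensions.
    return float(np.sum(
        -0.5 * np.log(2.0 * np.pi * sigma ** 2)
        - (x - mu) ** 2 / (2.0 * sigma ** 2)
    ))

def semantic_log_likelihood(x, decoder_mu, decoder_sigma,
                            feature_fn, feature_mu, feature_sigma):
    # Pixel-space term: the usual VAE likelihood, using the mean and
    # scale parameters returned by the decoder for input x.
    pixel_term = gaussian_log_likelihood(x, decoder_mu, decoder_sigma)
    # Feature-space term (assumed form): the same kind of density,
    # evaluated on self-supervised features of x, so that semantic
    # structure also contributes to the likelihood estimate.
    features = feature_fn(x)
    feature_term = gaussian_log_likelihood(features, feature_mu, feature_sigma)
    # Combining the two terms additively in log space is one simple
    # choice; the paper may weight or combine them differently.
    return pixel_term + feature_term
```

A toy usage, with an identity `feature_fn` standing in for a learned self-supervised encoder, would score an input against both pixel-space and feature-space Gaussian parameters and return a single scalar log-likelihood.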