On the relationship between Normalising Flows and Variational- and Denoising Autoencoders

Published: 03 May 2019 (Last Modified: 05 May 2023), DeepGenStruct 2019
Keywords: variational autoencoders, denoising variational autoencoders, normalizing flows, generative modelling, image synthesis, denoising autoencoders, VAE, DAE, VDAE, NF
TL;DR: We explore the relationship between Normalising Flows and Variational- and Denoising Autoencoders, and propose a novel model that generalises them.
Abstract: Normalising Flows (NFs) are a class of likelihood-based generative models that have recently gained popularity. They are based on the idea of transforming a simple density into that of the data. We seek to better understand this class of models and how they compare to previously proposed techniques for generative modelling and unsupervised representation learning. For this purpose we reinterpret NFs in the framework of Variational Autoencoders (VAEs), and present a new form of VAE that generalises normalising flows. The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder (VDAE). Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders in a set of preliminary experiments on the MNIST handwritten digits. The experiments shed light on the modelling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.
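For context, the comparison sketched in the abstract rests on two standard objectives from the literature; the following is a minimal LaTeX sketch of the generic formulations, not the paper's specific VDAE objective:

```latex
% Change-of-variables log-likelihood optimised by a normalising flow,
% with invertible map f from data x to base variable z = f(x):
\log p_X(x) = \log p_Z\big(f(x)\big)
            + \log \left| \det \frac{\partial f(x)}{\partial x} \right|

% Evidence lower bound (ELBO) maximised by a VAE with encoder q(z|x),
% decoder p(x|z), and prior p(z):
\log p(x) \ge \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big]
            - \mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)
```

Informally, the flow objective can be read as a limiting case of the ELBO in which the approximate posterior becomes a deterministic, invertible map; the precise construction the authors use to generalise this into the VDAE is given in the full paper.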