On Unifying Deep Generative Models

15 Feb 2018 (modified: 25 Feb 2018) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), two powerful frameworks for deep generative model learning, have largely been treated as distinct paradigms and studied extensively along independent lines. This paper aims to establish formal connections between GANs and VAEs through a new formulation of both. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs minimize KL divergences between their respective posterior and inference distributions, but in opposite directions, with each extending one of the two learning phases of the classic wake-sleep algorithm. The unified view provides a powerful tool for analyzing a diverse set of existing model variants, and enables techniques to be transferred across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.
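As a reading aid, the opposite-direction KL claim can be written schematically. The notation below (q_eta for the inference side, p_theta for the generative model, y for a real/fake indicator, q^r for the discriminator-induced "reversed" distribution) is an assumed paraphrase for illustration, not the paper's exact formulation, whose objectives carry additional terms.

```latex
% Schematic only; assumed notation, extra terms in the exact result omitted.
\begin{align}
  \text{VAE (wake-phase-like):}\quad
    &\min_{\theta,\eta}\;
     \mathbb{E}_{x}\Big[\,\mathrm{KL}\big(q_\eta(z \mid x)\,\big\|\,p_\theta(z \mid x)\big)\Big] \\
  \text{GAN (sleep-phase-like):}\quad
    &\min_{\theta}\;
     \mathbb{E}_{y}\Big[\,\mathrm{KL}\big(p_\theta(x \mid y)\,\big\|\,q^{r}(x \mid y)\big)\Big]
\end{align}
```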
TL;DR: A unified statistical view of the broad class of deep generative models
Keywords: deep generative models, generative adversarial networks, variational autoencoders, variational inference
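The importance weighting transfer mentioned in the abstract can be illustrated with a minimal PyTorch sketch. Everything here is a hypothetical rendering of the idea, not the paper's implementation: `iw_generator_loss`, the `(batch, k)` logit layout, and the non-saturating base loss are assumptions for illustration. The key observation is that discriminator logits double as log importance weights, since logit(D(x)) = log(D(x) / (1 - D(x))).

```python
import torch
import torch.nn.functional as F

def iw_generator_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Importance-weighted generator loss (illustrative sketch only).

    fake_logits: discriminator logits on k generated samples per
    instance, shape (batch, k). Since logit(D(x)) = log(D / (1 - D)),
    the logits serve as log importance weights; they are softmax-
    normalized over the k samples and detached, so they rescale the
    generator gradient without receiving gradients themselves.
    """
    w = torch.softmax(fake_logits.detach(), dim=1)   # normalized weights
    per_sample = -F.logsigmoid(fake_logits)          # non-saturating loss, -log D(x)
    return (w * per_sample).sum(dim=1).mean()
```

With k = 1 the softmax weight is identically 1, and the expression reduces to the ordinary non-saturating generator loss.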