Learning Priors for Adversarial Autoencoders

Anonymous

Nov 07, 2017 (modified: Nov 07, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Most deep latent factor models adopt simple priors for tractability, or simply because it is unclear what prior to use. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when the generation network has limited capacity. In this paper, we propose to learn a proper prior from data for the adversarial autoencoder (AAE). We introduce the notion of a code generator that transforms a manually selected simple prior into one that better fits the data distribution. Experimental results show that the proposed model generates images of better quality and learns better disentangled representations than AAE in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task. (A minimal sketch of the code-generator idea follows this list.)
  • TL;DR: Learning Priors for Adversarial Autoencoders
  • Keywords: deep learning, computer vision, generative adversarial networks
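
The abstract's core idea is architectural: instead of regularizing the encoder's latent codes toward a fixed simple prior, a code generator maps samples from that simple prior to a learned prior, and a latent-space discriminator matches the two code distributions adversarially. Below is a minimal PyTorch sketch of that setup; all layer sizes, network names (`code_generator`, `encoder`, `discriminator`), and the flattened 784-dimensional image input are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of an AAE with a learned prior: a code generator
# transforms samples from a simple prior N(0, I) into latent codes, and a
# discriminator adversarially matches them against the encoder's codes.
import torch
import torch.nn as nn

latent_dim, noise_dim = 8, 8

code_generator = nn.Sequential(      # simple prior -> learned prior (assumed sizes)
    nn.Linear(noise_dim, 64), nn.ReLU(),
    nn.Linear(64, latent_dim),
)
encoder = nn.Sequential(             # flattened images -> latent codes (assumed sizes)
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
discriminator = nn.Sequential(       # learned-prior codes vs. encoded codes
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

x = torch.rand(32, 784)              # dummy batch of images
z = torch.randn(32, noise_dim)       # samples from the simple prior
prior_codes = code_generator(z)      # samples from the learned prior
posterior_codes = encoder(x)         # codes inferred from data

bce = nn.BCELoss()
# Discriminator: learned-prior codes labeled real, encoder codes labeled fake
d_loss = bce(discriminator(prior_codes), torch.ones(32, 1)) \
       + bce(discriminator(posterior_codes.detach()), torch.zeros(32, 1))
# Encoder: adversarially pushes its code distribution toward the learned prior
e_loss = bce(discriminator(posterior_codes), torch.ones(32, 1))
```

In a full training loop these losses would be combined with the usual autoencoder reconstruction term, and the code generator itself would receive gradients that shape the learned prior; the sketch only shows the adversarial code-matching step.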
