Learning Priors for Adversarial Autoencoders
Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Most deep latent factor models choose simple priors for simplicity or tractability, or because it is unclear what prior should be used. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when the generation network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAE). We introduce the notion of code generators, which transform a manually selected simple prior into one that better fits the data distribution. Experimental results show that the proposed model generates images of better quality and learns better-disentangled representations than AAE in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
TL;DR: Learning Priors for Adversarial Autoencoders
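
To make the code-generator idea concrete, below is a minimal PyTorch sketch of how a learned prior could replace the fixed prior in an AAE's adversarial matching step. The architecture, layer sizes, and names (CodeGenerator, noise_dim, code_dim) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class CodeGenerator(nn.Module):
    """Hypothetical sketch: maps samples z ~ N(0, I) from a simple,
    manually chosen prior to codes from a learned prior."""
    def __init__(self, noise_dim=64, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )

    def forward(self, z):
        return self.net(z)

# In a standard AAE, the discriminator distinguishes encoder outputs from
# samples of a fixed prior. With a code generator, the "real" samples seen
# by the discriminator come from the learned prior instead:
code_gen = CodeGenerator()
z = torch.randn(32, 64)               # simple Gaussian prior samples
prior_samples = code_gen(z)           # transformed (learned) prior samples
```

Training would then alternate between updating the AAE as usual and updating the code generator so that the transformed prior is easier for the limited-capacity decoder to model; the exact objectives are described in the paper.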