Abstract: Most deep latent factor models adopt simple priors for simplicity or tractability, or because it is unclear what prior to use. Recent studies show that the choice of prior can have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators, which transform manually selected simple priors into priors that better characterize the data distribution. Experimental results show that the proposed model generates images of better quality and learns better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
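The code-generator idea can be made concrete with a minimal sketch: a small network maps samples from a simple, hand-picked prior (here a standard Gaussian) into a learned prior over latent codes, which is then matched to the encoder's code distribution adversarially, as in an AAE. All module names, dimensions, and training details below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assuming PyTorch) of a code generator that turns a simple
# prior into a learned prior, matched to the encoder codes adversarially.
# All names, sizes, and training details are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8  # assumed latent-code dimensionality


def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))


code_gen = mlp(latent_dim, latent_dim)  # simple prior z -> learned prior code
encoder = mlp(784, latent_dim)          # image x -> latent code (e.g., 28x28 images)
disc = mlp(latent_dim, 1)               # tells prior codes from encoder codes
bce = nn.BCEWithLogitsLoss()

x = torch.rand(32, 784)                 # stand-in minibatch of images
z = torch.randn(32, latent_dim)         # samples from the simple prior

prior_codes = code_gen(z)               # samples from the learned prior
enc_codes = encoder(x)                  # codes from the aggregated posterior

# Discriminator step: learned-prior codes are "real", encoder codes "fake".
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)
d_loss = bce(disc(prior_codes.detach()), ones) + bce(disc(enc_codes.detach()), zeros)

# Encoder step: fool the discriminator so the code distribution matches
# the learned prior rather than the original simple prior.
g_loss = bce(disc(encoder(x)), ones)
```

The sketch only shows the distribution-matching step; in the paper, the code generator itself is also trained jointly with the rest of the model so that the learned prior better characterizes the data.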
TL;DR: Learning Priors for Adversarial Autoencoders
Keywords: deep learning, computer vision, generative adversarial networks