Impact of the latent space on the ability of GANs to fit the distribution

Anonymous

Sep 25, 2019 Blind Submission
  • Keywords: Deep Learning, Generative Adversarial Networks, Compression, Perceptual Quality
  • TL;DR: We analyze the impact of the latent space of fully trained generators by pseudo-inverting them.
  • Abstract: The goal of generative models is to model the underlying data distribution of a sample-based dataset. Our intuition is that an accurate model should in principle also cover the sample-based dataset as part of its induced probability distribution. To investigate this, we take fully trained generative models from the Generative Adversarial Networks (GAN) framework and analyze the resulting generator's ability to memorize the training dataset. We further show that the size of the initial latent space is paramount for an accurate reconstruction of the training data. This links our analysis to compression theory, where Autoencoders (AE) are used to lower-bound the reconstruction capabilities of our generative model. Here, we observe results similar to the perception-distortion tradeoff (Blau & Michaeli, 2018). Given a small latent space, the AE produces low-quality outputs while the GAN produces high-quality outputs from a perceptual viewpoint; in contrast, the distortion error is smaller for the AE. Increasing the dimensionality of the latent space decreases the distortion for both models, but the perceptual quality increases only for the AE.
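The pseudo-inversion mentioned in the TL;DR is typically done by optimizing a latent code so that the generator's output matches a given training sample. Below is a minimal sketch of that idea, assuming a frozen PyTorch generator `G`; the function name `pseudo_invert` and the hyperparameters (`steps`, `lr`) are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def pseudo_invert(G, x, latent_dim, steps=1000, lr=0.1):
    """Find a latent code z such that G(z) approximates the target sample x.

    G          -- a fully trained generator (torch.nn.Module), kept frozen
    x          -- target image tensor of shape (1, C, H, W)
    latent_dim -- dimensionality of the generator's latent space
    """
    # Freeze the generator; only the latent code is optimized.
    for p in G.parameters():
        p.requires_grad_(False)

    z = torch.randn(1, latent_dim, requires_grad=True)  # random init
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(z), x)  # per-pixel distortion to the target
        loss.backward()
        opt.step()

    return z.detach(), loss.item()
```

Repeating this per training sample and aggregating the final losses yields a distortion measure of how well the generator covers the training set; under the paper's claim, this reconstruction error should shrink as the latent dimensionality grows.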