Abstract: Generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained with a fixed prior distribution over the latent space, such as a uniform or Gaussian distribution.
Once a model is trained, one can explore and understand it by sampling the generator in various ways, such as interpolating between two samples, sampling in the vicinity of a sample, or applying the difference between a pair of samples to a third sample.
In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution-matching transport maps so that such latent space operations preserve the prior distribution while minimally modifying the original operation (see the sketch below for the Gaussian case).
Our experimental results validate that the proposed operations yield higher-quality samples than the original operations.
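As a minimal illustration of the mismatch, the NumPy sketch below (the latent dimensionality `d` and interpolation weight `t` are illustrative assumptions) shows that linearly interpolating two samples from a standard Gaussian prior produces a vector whose distribution no longer matches the prior, and that a closed-form rescaling restores it in this special case. The paper's general construction uses optimal transport maps and is not limited to Gaussian priors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100                      # latent dimensionality (illustrative)
z1 = rng.standard_normal(d)  # two latent codes drawn from the N(0, I) prior
z2 = rng.standard_normal(d)
t = 0.5                      # interpolation weight

# Standard linear interpolation: a convex combination of two independent
# N(0, I) vectors is distributed as N(0, ((1-t)^2 + t^2) * I), so at the
# midpoint its squared norm concentrates around d/2 rather than d.
z_lerp = (1 - t) * z1 + t * z2

# Distribution-matched interpolation under a Gaussian prior: rescaling by
# sqrt((1-t)^2 + t^2) makes the result N(0, I) again. This closed form is
# specific to the Gaussian case; general priors require a transport map.
z_matched = z_lerp / np.sqrt((1 - t) ** 2 + t ** 2)

print(np.linalg.norm(z_lerp) ** 2)     # ~ d/2 on average: mismatched
print(np.linalg.norm(z_matched) ** 2)  # ~ d on average: matches the prior
```

Feeding `z_lerp` to a generator trained on N(0, I) evaluates it on inputs it rarely saw during training, which is the source of the quality degradation that the matched operations avoid.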
TL;DR: Operations in the GAN latent space can induce a distribution mismatch compared to the training distribution, and we address this using optimal transport to match the distributions.
Keywords: Generative Models, GANs, latent space operations, optimal transport
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [LSUN](https://paperswithcode.com/dataset/lsun)