Unsupervised Conditional Generation using noise engineered mode matching GAN

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Conditional generation refers to the process of sampling from an unknown distribution conditioned on the semantics of the data. This can be achieved by augmenting the generative model with the desired semantic labels, although this is not straightforward in an unsupervised setting, where the semantic label of each data sample is unknown. In this paper, we address this issue by proposing a method that can generate samples conditioned on the properties of a latent distribution engineered in accordance with a certain data prior. In particular, a latent space inversion network is trained in tandem with a generative adversarial network such that the modal properties of the latent space distribution are induced in the data generating distribution. We demonstrate that our model, despite being fully unsupervised, is effective at learning meaningful representations through its mode matching property. We validate our method on multiple unsupervised tasks, such as conditional generation, dataset attribute discovery, and inference, using three real-world image datasets, namely MNIST, CIFAR-10, and CELEB-A, and show that the results are comparable to those of state-of-the-art methods.
Keywords: Noise engineered GAN, Latent space engineering, Mode matching, Unsupervised learning
TL;DR: A GAN model where an inversion mapping from the generated data space to an engineered latent space is learned such that properties of the data generating distribution are matched to those of the latent distribution.
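To make the described mechanism concrete, here is a minimal sketch of a GAN trained jointly with an inversion network under an engineered multimodal latent prior. It assumes a mixture-of-Gaussians latent distribution, a standard GAN loss, and a cross-entropy mode-recovery loss on the inverter; all names (`G`, `D`, `I`, `K_MODES`, etc.), network sizes, and hyperparameters are illustrative assumptions and not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a GAN whose latent prior is a
# K-mode Gaussian mixture, trained jointly with an inversion network that
# recovers the latent mode index from the generated sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

K_MODES, Z_DIM, X_DIM = 10, 64, 784  # assumed sizes for an MNIST-like setup

def sample_engineered_latent(batch_size):
    """Draw z from a K-mode Gaussian mixture: mode-specific mean + Gaussian noise."""
    modes = torch.randint(0, K_MODES, (batch_size,))
    means = F.one_hot(modes, K_MODES).float() * 3.0          # well-separated mode means
    z = torch.cat([means, torch.randn(batch_size, Z_DIM - K_MODES)], dim=1)
    return z, modes

class MLP(nn.Sequential):
    def __init__(self, d_in, d_out):
        super().__init__(nn.Linear(d_in, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, d_out))

G = MLP(Z_DIM, X_DIM)          # generator: engineered latent -> data space
D = MLP(X_DIM, 1)              # discriminator: real/fake logit
I = MLP(X_DIM, K_MODES)        # inverter: generated sample -> mode logits

opt_g = torch.optim.Adam(list(G.parameters()) + list(I.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_real):
    b = x_real.size(0)
    z, modes = sample_engineered_latent(b)
    x_fake = G(z)

    # Discriminator update: standard GAN objective.
    opt_d.zero_grad()
    d_loss = bce(D(x_real), torch.ones(b, 1)) + \
             bce(D(x_fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()

    # Generator + inverter update: fool D while making the mode of z
    # recoverable from G(z), tying data modes to latent modes.
    opt_g.zero_grad()
    g_loss = bce(D(x_fake), torch.ones(b, 1)) + \
             F.cross_entropy(I(x_fake), modes)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Under this reading, unsupervised conditional generation amounts to fixing the mode index when sampling z at test time; the joint training encourages each latent mode to align with a semantic cluster in the data, which is the mode matching property the abstract refers to.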