TL;DR: A method to control image generation with GANs and beta-VAEs with respect to the scale and position of objects.
Abstract: Recent deep generative models can produce photo-realistic images as well as visual or textual content embeddings that are useful for a variety of computer vision and natural language processing tasks. Their usefulness is nevertheless often limited by a lack of control over the generative process or a poor understanding of the learned representation. To overcome these major issues, recent works have studied the semantics of the latent space of generative models. In this paper, we advance the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model, along which one can move to precisely control specific properties of the generated image, such as the position or scale of the object it contains. Our method is weakly supervised and particularly well suited to finding directions that encode simple transformations of the generated image, such as translation, zoom, or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, for both GANs and variational auto-encoders.
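The control mechanism the abstract describes is a latent-space traversal: starting from a latent code z, move along a direction u that encodes a factor of variation and decode each point. The sketch below illustrates the idea only; `toy_generator`, `u`, and the latent dimension are hypothetical placeholders standing in for a trained model and a learned direction, not the paper's actual code or API.

```python
import numpy as np

def toy_generator(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained generator mapping a latent code to an image."""
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return rng.random((64, 64, 3))  # placeholder 64x64 RGB "image"

def traverse(generator, z: np.ndarray, u: np.ndarray, alphas) -> list:
    """Decode points along direction u in latent space, starting from z."""
    u = u / np.linalg.norm(u)                # work with a unit-norm direction
    return [generator(z + a * u) for a in alphas]

rng = np.random.default_rng(0)
z = rng.standard_normal(128)                 # starting latent code
u = rng.standard_normal(128)                 # direction assumed to encode e.g. position
frames = traverse(toy_generator, z, u, np.linspace(-3.0, 3.0, 7))
```

With a real generator and a direction found by the paper's method, the seven decoded frames would show the controlled property (e.g. horizontal position) varying smoothly while other image content stays fixed.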
Keywords: Generative models, factors of variation, GAN, beta-VAE, interpretable representation, interpretability
Code: [AntoinePlumerault/Controlling-generative-models-with-continuous-factors-of-variations](https://github.com/AntoinePlumerault/Controlling-generative-models-with-continuous-factors-of-variations)
Data: [dSprites](https://paperswithcode.com/dataset/dsprites)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2001.10238/code)