- Keywords: multi-view, adversarial learning, generative model
- TL;DR: We describe a novel multi-view generative model that can generate multiple views of the same object, or multiple objects in the same view, without requiring any labels on the views.
- Abstract: The development of high-dimensional generative models has recently attracted a surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. We therefore propose a generative model, together with a conditional variant, built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a wide variety of views. Unlike many multi-view approaches, our model requires no supervision on the views, only on the content. Compared to other conditional generation approaches, which are mostly based on binary or categorical attributes, we make no such assumption about the factors of variation. Our model can therefore be applied to problems with a huge, potentially infinite, number of categories. We evaluate it on four image datasets, on which we demonstrate the effectiveness of the model and its ability to generalize.
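- Illustration: the core idea of the disentangled latent space can be sketched as a generator that takes two independent latent codes, one for content and one for view. The sketch below is a toy stand-in (a random linear map, not the paper's deep convolutional generator), and all names and dimensions are illustrative assumptions; it only shows how fixing the content code while varying the view code yields multiple views of the same object, and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

CONTENT_DIM, VIEW_DIM, IMG_DIM = 8, 4, 16

# Stand-in for a trained generator: a fixed random linear map from the
# concatenated (content, view) latent code to a flat "image" vector.
# The actual model uses a deep network trained adversarially.
W = rng.standard_normal((IMG_DIM, CONTENT_DIM + VIEW_DIM))

def generate(content, view):
    """Map a (content, view) latent pair to a sample."""
    z = np.concatenate([content, view])
    return np.tanh(W @ z)

# One object (fixed content code) rendered under three different views.
c = rng.standard_normal(CONTENT_DIM)
views = [rng.standard_normal(VIEW_DIM) for _ in range(3)]
samples_same_object = [generate(c, v) for v in views]

# Three different objects (varying content) rendered under the same view.
contents = [rng.standard_normal(CONTENT_DIM) for _ in range(3)]
samples_same_view = [generate(ci, views[0]) for ci in contents]
```

  Because content and view are sampled independently, any content code can be paired with any view code at generation time, which is what allows the model to cover an open-ended set of object categories without view labels.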
- Code: [![github](/images/github_icon.svg) mickaelChen/GMV](https://github.com/mickaelChen/GMV)
- Data: [CelebA](https://paperswithcode.com/dataset/celeba), [Oxford 102 Flower](https://paperswithcode.com/dataset/oxford-102-flower)