Multi-View Data Generation Without View Supervision

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: The development of high-dimensional generative models has recently seen a surge of interest with the introduction of variational auto-encoders and generative adversarial networks. Different variants have been proposed in which the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on the problem of generating samples that correspond to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. We therefore propose a generative model, and a conditional variant, built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a wide variety of views. Unlike many multi-view approaches, our model requires no supervision on the views, only on the content. Compared to other conditional generation approaches, which mostly rely on binary or categorical attributes, we make no such assumption about the factors of variation, so our model can be applied to problems with a huge, potentially infinite, number of categories. We evaluate it on four image datasets, on which we demonstrate the effectiveness of the model and its ability to generalize.
TL;DR: We describe a novel multi-view generative model that can generate multiple views of the same object, or multiple objects in the same view, without any labels on the views.
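
As a rough illustration of the disentangled latent space described above, here is a minimal, assumed PyTorch-style sketch. The names, dimensions, and layers are hypothetical and do not reflect the authors' exact architecture (see the linked repository for that); it only shows how a generator conditioned on separate content and view codes supports the two generation modes in the TL;DR.

```python
# Minimal sketch (assumed PyTorch): a generator G(z_content, z_view).
# All names, sizes, and layers are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, content_dim=128, view_dim=16, img_dim=64 * 64 * 3):
        super().__init__()
        # The content and view codes are concatenated, so both factors
        # jointly drive generation while being independently sampled.
        self.net = nn.Sequential(
            nn.Linear(content_dim + view_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_dim),
            nn.Tanh(),
        )

    def forward(self, z_content, z_view):
        return self.net(torch.cat([z_content, z_view], dim=1))

G = Generator()
z_content = torch.randn(1, 128)   # intrinsic features of one object
views = torch.randn(8, 16)        # eight independently drawn views

# Multiple views of the same object: fix the content code, vary the view code.
same_object = G(z_content.expand(8, -1), views)

# Multiple objects in the same view: vary the content, fix the view.
objects = torch.randn(8, 128)
same_view = G(objects, views[:1].expand(8, -1))
```

Fixing one latent factor while resampling the other yields exactly the two generation modes the model is designed for, with no view labels needed at sampling time.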
Keywords: multi-view, adversarial learning, generative model
Code: [mickaelChen/GMV](https://github.com/mickaelChen/GMV)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [Oxford 102 Flower](https://paperswithcode.com/dataset/oxford-102-flower)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1711.00305/code)