A Case for Object Compositionality in Deep Generative Models of Images

27 Sept 2018 (modified: 14 Oct 2024) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Deep generative models seek to recover the process by which the observed data were generated. They may be used to synthesize new samples or, subsequently, to extract representations. Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and to generate images by means of composition. This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations. We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level. A human study reveals that the resulting generative model produces images that are more faithful to the reference distribution.
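The compositional recipe the abstract describes lends itself to a compact sketch: K object latents exchange information through a relational stage, a shared generator renders each object as RGB plus an alpha mask, and the masks composite the objects into a single image. Below is a minimal, hypothetical PyTorch illustration of that recipe; all module names, layer sizes, and the attention-based relational stage are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CompositionalGenerator(nn.Module):
    """Hypothetical sketch of an object-compositional generator:
    K object latents -> relational stage -> per-object RGB + alpha -> composite.
    Sizes and modules are illustrative, not the paper's architecture."""

    def __init__(self, z_dim=64, k_objects=3, img_ch=3, res=32):
        super().__init__()
        self.k, self.img_ch, self.res = k_objects, img_ch, res
        # Relational stage (assumed): each object latent attends to the others,
        # so objects can be generated consistently with their relations.
        self.relate = nn.MultiheadAttention(embed_dim=z_dim, num_heads=4,
                                            batch_first=True)
        # Shared object generator applied to each related latent;
        # emits img_ch RGB channels plus 1 alpha (mask) channel per object.
        self.object_gen = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, (img_ch + 1) * res * res),
        )

    def forward(self, z):                       # z: (B, K, z_dim)
        z_rel, _ = self.relate(z, z, z)         # object latents exchange information
        out = self.object_gen(z_rel)            # (B, K, (C+1)*H*W)
        out = out.view(-1, self.k, self.img_ch + 1, self.res, self.res)
        rgb = torch.tanh(out[:, :, :self.img_ch])
        # Normalize alpha masks across objects so each pixel's weights sum to 1.
        alpha = torch.softmax(out[:, :, self.img_ch:], dim=1)
        return (alpha * rgb).sum(dim=1)         # alpha-composite into one image

gen = CompositionalGenerator()
fake = gen(torch.randn(8, 3, 64))               # 8 samples, 3 object latents each
print(fake.shape)                               # torch.Size([8, 3, 32, 32])
```

The key design choice this sketch highlights is that composition happens in image space via per-object alpha masks, which is what lets the generator disentangle information belonging to different objects at a representational level.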
Keywords: Objects, Compositionality, Generative Models, GAN, Unsupervised Learning
TL;DR: We propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CLEVR](https://paperswithcode.com/dataset/clevr)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/a-case-for-object-compositionality-in-deep/code)