- Keywords: controllable image generation, text-to-image synthesis, generative model, generative adversarial network, gan
- TL;DR: Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images.
- Abstract: Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions. Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions. However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve. This is especially true for images that should contain multiple distinct objects at different spatial locations. We introduce a new approach that allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator. Our approach does not require a detailed semantic layout; only bounding boxes and the respective labels of the desired objects are needed. The object pathway focuses solely on the individual objects and is applied iteratively at the locations specified by the bounding boxes. The global pathway focuses on the image background and the general image layout. We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data sets. Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations. We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background.
- Code: [![github](/images/github_icon.svg) tohinz/multiple-objects-gan](https://github.com/tohinz/multiple-objects-gan)
- Data: [CLEVR](https://paperswithcode.com/dataset/clevr), [COCO](https://paperswithcode.com/dataset/coco), [SHAPES](https://paperswithcode.com/dataset/shapes-1)
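The two-pathway idea described in the abstract can be sketched in a few lines: a global pathway produces a background feature canvas from the noise vector, and an object pathway is applied once per (label, bounding box) pair to write object features into the canvas at the specified location. The sketch below is a minimal NumPy illustration, not the paper's architecture; the weight matrices, shapes, and the simple broadcast-based "patch generation" are all stand-ins for the learned convolutional layers used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, PATCH, FEAT = 64, 16, 8  # image size, object patch size, feature channels

# Hypothetical "learned" weights, randomly initialized here for illustration.
W_global = rng.normal(size=(FEAT, FEAT))
W_object = rng.normal(size=(FEAT, FEAT))

def object_pathway(label_emb, bbox, canvas):
    """Generate features for one object and place them at its bounding box.

    label_emb: (FEAT,) embedding of the object's label
    bbox:      (x, y, w, h) in feature-map coordinates
    canvas:    (FEAT, IMG, IMG) feature canvas, modified in place
    """
    x, y, w, h = bbox
    patch = np.tanh(W_object @ label_emb)  # object features from the label
    # Broadcast the object features over the bbox region -- a stand-in for
    # the convolutional patch generation in the real object pathway.
    canvas[:, y:y + h, x:x + w] += patch[:, None, None]
    return canvas

def generate(noise, objects):
    """Combine the global and object pathways into one feature map."""
    # Global pathway: background / overall layout from the noise vector.
    canvas = np.tile(np.tanh(W_global @ noise)[:, None, None], (1, IMG, IMG))
    # Object pathway: applied iteratively, once per (label, bbox) pair.
    for label_emb, bbox in objects:
        canvas = object_pathway(label_emb, bbox, canvas)
    return canvas

noise = rng.normal(size=FEAT)
objects = [(rng.normal(size=FEAT), (4, 4, PATCH, PATCH)),
           (rng.normal(size=FEAT), (40, 30, PATCH, PATCH))]
features = generate(noise, objects)
print(features.shape)  # (8, 64, 64)
```

In the paper, the resulting feature map would be passed through further upsampling layers to produce the image, and the discriminator uses an analogous object pathway to judge each bounding-box region individually.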