Illiterate DALL-E Learns to Compose

29 Sept 2021, 00:35 (edited 24 Mar 2022) · ICLR 2022 Poster
  • Keywords: Zero-Shot Image Generation, Compositional Representation, Object-Centric Representation, Out-of-Distribution Generalization, Image Transformers
  • Abstract: Although DALL-E has shown an impressive ability to perform composition-based systematic generalization in image generation, it requires a dataset of text-image pairs, and its compositionality is provided by the text. In contrast, object-centric representation models such as Slot Attention learn composable representations without text prompts. However, unlike DALL-E, their ability to systematically generalize for zero-shot generation is significantly limited. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that combines the best of both worlds: learning object-centric representations that allow systematic generalization in zero-shot image generation without text. As such, this model can also be seen as an illiterate DALL-E. Unlike the pixel-mixture decoders of existing object-centric representation models, we propose to use an Image GPT decoder conditioned on the slots to capture complex interactions among the slots and pixels. In experiments, we show that this simple, easy-to-implement architecture, which requires no text prompt, achieves significant improvements in in-distribution and out-of-distribution (zero-shot) image generation, and learns slot-attention structure qualitatively comparable to, or better than, that of models based on mixture decoders.
  • One-sentence Summary: Learning compositional slot-based representations of an image and composing slots for zero-shot generation of novel images.
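The slot-attention component the abstract builds on can be sketched as follows. This is a minimal, illustrative forward pass only: the random projection matrices, the simple averaging update (standing in for the learned GRU update of the original Slot Attention model), and all dimensions are assumptions for illustration, not the authors' implementation of SLATE.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, dim=32, iters=3, seed=0):
    """Minimal Slot Attention-style forward pass (illustrative sketch).

    inputs: (n_tokens, dim) array of image features.
    Returns a (num_slots, dim) array of slot vectors.
    """
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned weights (illustration only).
    Wq = rng.normal(0, dim ** -0.5, (dim, dim))
    Wk = rng.normal(0, dim ** -0.5, (dim, dim))
    Wv = rng.normal(0, dim ** -0.5, (dim, dim))
    slots = rng.normal(0, 1, (num_slots, dim))

    k, v = inputs @ Wk, inputs @ Wv
    for _ in range(iters):
        q = slots @ Wq
        # Softmax is taken over the *slot* axis, so slots compete
        # to explain each input token.
        attn = softmax(k @ q.T / np.sqrt(dim), axis=1)  # (n_tokens, num_slots)
        # Renormalize over tokens to form a weighted mean per slot.
        attn = attn / attn.sum(axis=0, keepdims=True)
        updates = attn.T @ v                            # (num_slots, dim)
        # Simple averaging update; the paper's model uses a learned GRU here.
        slots = 0.5 * slots + 0.5 * updates
    return slots
```

In SLATE, slot vectors produced this way condition an autoregressive Image GPT decoder over discrete image tokens, rather than a pixel-mixture decoder; that decoder is not sketched here.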