CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

Published: 31 Oct 2022, Last Modified: 23 Jan 2025
Venue: NeurIPS 2022 (Accept)
Readers: Everyone
Keywords: text-to-image generation, pretraining, transformer
TL;DR: Faster and Better Text-to-Image Generation via Hierarchical Transformers
Abstract: Development of transformer-based text-to-image models is impeded by their slow generation and high computational complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the cross-modal general language model (CogLM), and fine-tune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared with the concurrent state-of-the-art DALL-E-2 and naturally supports interactive text-guided editing of images.
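The CogLM objective named in the abstract can be pictured with a toy mask-and-predict example. The sketch below is a minimal, hedged illustration, not the authors' code: the real CogLM fills masked spans autoregressively with tailored attention masks over text and image tokens, and the real model has 6B parameters. All names here (`ToyTransformer`, `make_coglm_batch`, `VOCAB`, `MASK_ID`, `SEQ_LEN`) are hypothetical, and the infilling is simplified to a bidirectional reconstruction loss.

```python
# Toy sketch of a CogLM-style mask-and-predict objective (assumption-laden;
# simplified from the abstract, not the paper's actual training code).
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 1024, 0, 64  # toy sizes, hypothetical

class ToyTransformer(nn.Module):
    """Tiny stand-in for the paper's 6B-parameter transformer."""
    def __init__(self, d=128, heads=4, layers=2):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        self.pos = nn.Embedding(SEQ_LEN, d)
        enc = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.body = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens):
        x = self.emb(tokens) + self.pos(torch.arange(tokens.size(1)))
        return self.head(self.body(x))

def make_coglm_batch(tokens, span=8):
    """Mask one contiguous span of the (text + image) token sequence;
    the model must recover it from the surrounding context."""
    masked = tokens.clone()
    start = torch.randint(0, SEQ_LEN - span, (1,)).item()
    masked[:, start:start + span] = MASK_ID
    return masked, start, span

model = ToyTransformer()
tokens = torch.randint(1, VOCAB, (2, SEQ_LEN))   # fake tokenized text+image
masked, s, n = make_coglm_batch(tokens)
logits = model(masked)
loss = nn.functional.cross_entropy(              # loss on the masked span only
    logits[:, s:s + n].reshape(-1, VOCAB),
    tokens[:, s:s + n].reshape(-1))
loss.backward()
print(float(loss))
```

Because the same mask-then-recover task applies whether the span covers text or image tokens, one pretrained model can serve both generation and infilling, which is what makes the fast super-resolution fine-tuning and the text-guided editing mentioned in the abstract natural extensions.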
Supplementary Material: pdf
Community Implementations: [3 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/cogview2-faster-and-better-text-to-image/code)