Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models.

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · License: CC BY 4.0
Keywords: diffusion model, generative AI, image generation, foundation models, higher resolution
TL;DR: Pushing Image Generation at Higher-Resolutions with Foundation Models using a single GPU
Abstract: In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework for sampling images at higher resolutions with a single GPU. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the road to gigapixel image generation at no extra cost. Our cascading method uses the image generated at the lowest resolution as a baseline for sampling at higher resolutions. For guidance, we introduce the Slider, a mechanism that fuses the overall structure contained in the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, minimizing memory demands so that a single GPU can handle the process regardless of the image's resolution. Our experimental results show that this method not only achieves higher quality and diversity compared to existing techniques but also reduces sampling time and artifacts.
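The abstract describes two core ideas: denoising the latent in patches so peak memory depends on the patch size rather than the target resolution, and a "Slider" that blends the low-resolution structure with newly sampled fine details. The sketch below is a minimal, illustrative PyTorch rendering of those two ideas under assumed interfaces; `denoise_fn`, `tile_positions`, `denoise_patchwise`, and `slider_fuse` are hypothetical names and not the paper's actual API or implementation.

```python
import torch


def tile_positions(size, patch, stride):
    """Start offsets that cover [0, size) with patch-sized windows."""
    if size <= patch:
        return [0]
    pos = list(range(0, size - patch, stride))
    pos.append(size - patch)  # ensure the last tile reaches the border
    return pos


def denoise_patchwise(latent, denoise_fn, patch=128, stride=96):
    """Apply one denoising step patch by patch over a large latent tensor.

    `denoise_fn` stands in for a single reverse-diffusion step on a patch;
    overlapping patches are averaged so seams are softened. Peak memory is
    bounded by the patch size, not the full latent resolution (assumption:
    this mirrors the patch-wise denoising described in the abstract).
    """
    _, _, H, W = latent.shape
    out = torch.zeros_like(latent)
    weight = torch.zeros(1, 1, H, W, device=latent.device, dtype=latent.dtype)
    for y in tile_positions(H, patch, stride):
        for x in tile_positions(W, patch, stride):
            tile = latent[:, :, y:y + patch, x:x + patch]
            out[:, :, y:y + patch, x:x + patch] += denoise_fn(tile)
            weight[:, :, y:y + patch, x:x + patch] += 1.0
    return out / weight.clamp(min=1.0)


def slider_fuse(base_structure, fine_details, slider=0.5):
    """Blend the upsampled low-resolution structure with fine details.

    `slider` in [0, 1] weights detail against global structure; this linear
    blend is an assumed simplification of the paper's Slider mechanism.
    """
    return (1.0 - slider) * base_structure + slider * fine_details
```

As a usage sketch, one would upsample the lowest-resolution latent, call `denoise_patchwise` at each inference step with the model's denoiser as `denoise_fn`, and mix in the structural prior via `slider_fuse`; the actual Pixelsmith pipeline may differ in how and when the fusion is applied.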
Primary Area: Diffusion based models
Submission Number: 5651