Abstract: This paper explores image modeling from the frequency space and introduces DCTdiff, an end-to-end diffusion generative paradigm that efficiently models images in the discrete cosine transform (DCT) space. We investigate the design space of DCTdiff and reveal the key design factors. Experiments on different frameworks (UViT, DiT), generation tasks, and various diffusion samplers demonstrate that DCTdiff outperforms pixel-based diffusion models in both generative quality and training efficiency. Remarkably, DCTdiff can seamlessly scale up to 512$\times$512 resolution without using the latent diffusion paradigm and beats latent diffusion (using SD-VAE) with only 1/4 of the training cost. Finally, we illustrate several intriguing properties of DCT image modeling. For example, we provide a theoretical proof of why `image diffusion can be seen as spectral autoregression', bridging the gap between diffusion and autoregressive models. The effectiveness of DCTdiff and the introduced properties suggest a promising direction for image modeling in the frequency space. The code is at https://github.com/forever208/DCTdiff.
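For readers unfamiliar with the frequency-space view, the sketch below illustrates how an image can be mapped into (and back from) the DCT space. It is a minimal illustration only: the block size of 8 and the block-wise, orthonormal DCT-II are assumptions made for this example and are not necessarily the exact transform used by DCTdiff.

```python
# Minimal sketch (assumptions: 8x8 blocks, orthonormal DCT-II); this is not
# the paper's exact pipeline, only an illustration of the "DCT space".
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Apply an orthonormal 2D DCT-II to each non-overlapping block."""
    h, w = image.shape
    coeffs = np.zeros_like(image, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs[i:i+block, j:j+block] = dctn(
                image[i:i+block, j:j+block], norm="ortho")
    return coeffs

def blockwise_idct(coeffs: np.ndarray, block: int = 8) -> np.ndarray:
    """Invert the block-wise DCT, recovering the pixel-space image."""
    h, w = coeffs.shape
    image = np.zeros_like(coeffs, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            image[i:i+block, j:j+block] = idctn(
                coeffs[i:i+block, j:j+block], norm="ortho")
    return image

if __name__ == "__main__":
    img = np.random.rand(32, 32)   # stand-in for a grayscale image
    c = blockwise_dct(img)         # frequency-space representation
    rec = blockwise_idct(c)        # round trip back to pixel space
    print(np.allclose(img, rec))   # True: the DCT is invertible
```

Because the transform is invertible, a generative model can operate entirely on the DCT coefficients and still produce pixel-space images by applying the inverse transform at the end.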
Lay Summary: Generative image modeling in the pixel space is expensive, and high-resolution generation mostly operates in an image latent space produced by an extra autoencoder.
We developed DCTdiff, a new image generation approach that works in the frequency space using the discrete cosine transform (DCT).
DCTdiff generates images more efficiently and with better quality than pixel-based and latent-based diffusion models. Importantly, DCTdiff can scale up to $512\times512$ image generation without relying on a latent-space model. We also show that image modeling in the DCT space offers many useful properties for various image tasks.
Link To Code: https://github.com/forever208/DCTdiff
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: generative models, diffusion models, image frequency
Submission Number: 2876