TL;DR: We explore the spectral properties of modern autoencoders used for image/video latent diffusion training and find that a simple downsampling regularization can substantially boost their downstream LDM performance.
Abstract: Latent diffusion models have emerged as the leading approach for generating high-quality images and videos, utilizing compressed latent representations to reduce the computational burden of the diffusion process. While recent advancements have primarily focused on scaling diffusion backbones and improving autoencoder reconstruction quality, the interaction between these components has received comparatively less attention. In this work, we perform a spectral analysis of modern autoencoders and identify inordinate high-frequency components in their latent spaces, which are especially pronounced in autoencoders with a large bottleneck channel size. We hypothesize that these high-frequency components interfere with the coarse-to-fine nature of the diffusion synthesis process and hinder generation quality. To mitigate the issue, we propose a simple regularization strategy that aligns latent and RGB spaces across frequencies by enforcing scale equivariance in the decoder. It requires minimal code changes and only up to $20$K autoencoder fine-tuning steps, yet significantly improves generation quality, reducing FID by 19% for image generation on ImageNet-1K 256x256 and FVD by at least 44% for video generation on Kinetics-700 17x256x256. The source code is available at https://github.com/snap-research/diffusability.
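For illustration only, the snippet below sketches the kind of spectral analysis the abstract refers to: comparing the radially averaged Fourier power spectrum of autoencoder latents against that of the RGB inputs. It is not the authors' released code; the autoencoder handle `ae` and the helper `radial_power_spectrum` are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation) of a latent-vs-RGB spectral comparison.
# Assumes a PyTorch autoencoder `ae` with an encode() method returning [B, C, h, w] latents.
import torch

def radial_power_spectrum(x: torch.Tensor, num_bins: int = 64) -> torch.Tensor:
    """x: [B, C, H, W] -> mean spectral energy per radial frequency bin."""
    spec = torch.fft.fftshift(torch.fft.fft2(x.float(), norm="ortho"), dim=(-2, -1))
    power = spec.abs() ** 2                                    # per-frequency energy
    H, W = power.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.arange(H) - H // 2, torch.arange(W) - W // 2, indexing="ij"
    )
    radius = torch.sqrt(yy.float() ** 2 + xx.float() ** 2)
    radius = radius / radius.max()                             # normalize radii to [0, 1]
    bins = torch.clamp((radius * num_bins).long(), max=num_bins - 1)
    flat_power = power.mean(dim=(0, 1)).flatten()              # average over batch/channels
    flat_bins = bins.flatten()
    energy = torch.zeros(num_bins).scatter_add_(0, flat_bins, flat_power)
    counts = torch.zeros(num_bins).scatter_add_(0, flat_bins, torch.ones_like(flat_power))
    return energy / counts.clamp(min=1)

# Usage (illustrative): a latent spectrum with a much heavier high-frequency tail than
# the RGB spectrum is the failure mode the paper associates with poor diffusability.
# rgb_spectrum    = radial_power_spectrum(images)              # images: [B, 3, 256, 256]
# latent_spectrum = radial_power_spectrum(ae.encode(images))
```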
Lay Summary: In recent years, image and video generation models have rapidly advanced, with both industry and academia investing heavily. Most of these models follow the latent diffusion approach: an autoencoder first compresses images or videos into a smaller latent space, and then a diffusion model is trained to generate samples in that space.
So far, most work has focused on improving the autoencoder's reconstruction quality and compression rate. But our work shows that the choice of autoencoder has a deeper effect: it shapes how well a diffusion model can generate realistic outputs. We call this diffusability: how easy it is for a diffusion model to learn to generate in a given representation space.
Diffusion models build images by gradually refining noise, starting from a blurry outline and adding details step by step. This process tends to struggle with high-frequency details (like textures or fine edges), where errors can accumulate. Normally, the human eye is less sensitive to these errors in pixel space. But we found that some autoencoders place more emphasis on high frequencies in their latent space than RGB images do. As a result, critical image structures get encoded in unstable high-frequency components, making them harder for the diffusion model to learn and sample correctly.
To address this, we introduce a simple training technique: during autoencoder training, we downsample the latent representation and require the decoder to still produce a meaningful reconstruction. This encourages the autoencoder to store important information in more robust, low-frequency components. A hedged code sketch of this idea follows below.
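The sketch below shows one plausible way to express this downsampling regularization as an auxiliary loss, under the assumption of a standard convolutional encoder/decoder pair; `encoder`, `decoder`, and the weight `lambda_se` are illustrative placeholders, not the released implementation.

```python
# Minimal sketch (assumptions, not the authors' exact code) of the downsampling
# regularization: downsample the latent, decode it, and match a correspondingly
# downsampled target image.
import torch
import torch.nn.functional as F

def scale_equivariance_loss(encoder, decoder, images: torch.Tensor,
                            scale: float = 0.5) -> torch.Tensor:
    """images: [B, 3, H, W] in the range expected by the autoencoder."""
    latents = encoder(images)                                   # [B, C, h, w]
    latents_down = F.interpolate(latents, scale_factor=scale,
                                 mode="bilinear", align_corners=False)
    recon_down = decoder(latents_down)                          # decoded at reduced scale
    target_down = F.interpolate(images, scale_factor=scale,
                                mode="bilinear", align_corners=False)
    return F.l1_loss(recon_down, target_down)

# During fine-tuning, this term would simply be added to the usual reconstruction
# objective, e.g. loss = recon_loss + lambda_se * scale_equivariance_loss(enc, dec, batch).
```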
We show that this small change leads to large improvements. It makes latent spaces more suitable for diffusion models, improving both image and video generation quality on benchmarks like ImageNet and Kinetics.
Link To Code: https://github.com/snap-research/diffusability
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: autoencoders, latent diffusion, image generation, video generation, DCT, diffusability, fourier transform
Submission Number: 5299