Keywords: Generative model, diffusion model, representation, discrete
TL;DR: We show that discrete representations of images improve unconditional and compositional generation
Abstract: We argue that diffusion models' success in modeling complex distributions stems, for the most part, from their conditioning. This paper investigates the representation used to condition diffusion models, from the perspective that an ideal representation should improve modeling of the data distribution, be easy to generate, and be compositional so as to allow generalization beyond the training distribution. We introduce the Discrete Latent Code (DLC), an image representation derived from Simplicial Embeddings trained with a self-supervised learning objective. Unlike standard continuous image embeddings, DLCs are sequences of discrete tokens. They are easy to generate, and their compositionality enables sampling of novel images beyond the training distribution. Diffusion models trained with DLCs improve generation fidelity, establishing a new state of the art for unconditional image generation on ImageNet. Additionally, we show that composing DLCs allows the image generator to produce out-of-distribution samples that coherently combine the semantics of images in diverse ways. Finally, we showcase how DLCs can enable text-to-image generation by leveraging large-scale pretrained language models. Using only 9M image-caption pairs, we efficiently finetune a text diffusion model to generate novel DLCs that produce samples outside the data distribution used to train the image generator.
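A minimal sketch of the core idea behind a DLC, under the assumption (consistent with Simplicial Embeddings) that the encoder output is partitioned into groups, each lying on a simplex, and that each group is discretized into one token, e.g. by argmax. The function name, shapes, and the argmax discretization here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: turn a simplicial embedding into a Discrete Latent Code (DLC).
# Assumption: the embedding is split into `num_tokens` groups of `vocab_size` dims,
# and each group is discretized to a single token index via argmax.
import torch

def embedding_to_dlc(embedding: torch.Tensor, num_tokens: int, vocab_size: int) -> torch.Tensor:
    """Map a (batch, num_tokens * vocab_size) embedding to (batch, num_tokens) discrete tokens."""
    batch = embedding.shape[0]
    groups = embedding.view(batch, num_tokens, vocab_size)  # one simplex per token position
    return groups.argmax(dim=-1)                            # discrete token index per group

# Example: a 512-dim embedding split into 32 tokens drawn from a vocabulary of 16.
emb = torch.randn(4, 32 * 16)
dlc = embedding_to_dlc(emb, num_tokens=32, vocab_size=16)
print(dlc.shape)  # torch.Size([4, 32])
```

The resulting token sequence is what the diffusion model would be conditioned on; composing DLCs then amounts to mixing token sequences rather than interpolating continuous vectors.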
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 8325