Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. In contrast, the joint generation of multimodal data with diffusion models is still in the early stages of exploration. Existing approaches rely heavily on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This places strong demands on the accuracy of the encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling the native generation of coupled data across different modalities. By introducing a decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model. We empirically validate our approach on text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance.
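The abstract does not spell out how the decoupled noise schedules are constructed, so the sketch below is only a rough illustration of the general idea under stated assumptions, not the paper's actual implementation. It pairs a continuous (image-like) modality corrupted with Gaussian noise and a discrete (text-like) modality corrupted by masking, each with its own independently sampled diffusion time. The network `model`, the cosine schedule, the mask token, and the loss weighting are all hypothetical choices for illustration; the key point is that holding one modality's time near zero leaves that modality clean, so the same trained network can act as a modality-conditioned denoiser, while sampling both times freely trains the joint (unconditional) model.

```python
# Minimal sketch of decoupled per-modality noise schedules (illustrative only).
# Assumptions (not from the paper): a continuous "image" modality of shape (B, D)
# with Gaussian corruption, a discrete "text" modality of shape (B, L) with
# mask corruption, a cosine schedule, and a hypothetical `model` that takes
# both noisy modalities plus their two separate times.
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical mask token id for the discrete modality


def corrupt_image(x0, t_img):
    """Gaussian (VP-style) corruption at per-sample times t_img in [0, 1]."""
    alpha = torch.cos(0.5 * torch.pi * t_img).view(-1, 1)
    sigma = torch.sin(0.5 * torch.pi * t_img).view(-1, 1)
    noise = torch.randn_like(x0)
    return alpha * x0 + sigma * noise, noise


def corrupt_text(tokens, t_text):
    """Mask-style corruption: each token is masked with probability t_text."""
    mask = torch.rand_like(tokens, dtype=torch.float) < t_text.view(-1, 1)
    return torch.where(mask, torch.full_like(tokens, MASK_ID), tokens), mask


def training_step(model, x_img, x_text):
    """One joint step with *decoupled* times for the two modalities."""
    b = x_img.shape[0]
    t_img = torch.rand(b)    # image noise level, sampled independently ...
    t_text = torch.rand(b)   # ... from the text noise level

    noisy_img, eps = corrupt_image(x_img, t_img)
    noisy_text, mask = corrupt_text(x_text, t_text)

    # Hypothetical network: predicts image noise and per-token logits,
    # conditioned on both noisy modalities and both times.
    eps_pred, logits = model(noisy_img, noisy_text, t_img, t_text)

    img_loss = ((eps_pred - eps) ** 2).mean()
    text_loss = (F.cross_entropy(logits[mask], x_text[mask])
                 if mask.any() else logits.sum() * 0.0)
    return img_loss + text_loss
```

At sampling time, the same decoupling allows, for instance, fixing t_text = 0 (clean caption) while denoising only the image times to obtain text-conditioned image generation, or running both reverse processes jointly for unconditional generation.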
Lay Summary: Many AI tools today can generate impressive results, such as realistic images, videos, or text, but they usually focus on only one type of data at a time. Creating models that can generate multiple types of data together, such as matching text with images, remains a major challenge. Current methods often rely on complex tools to convert different types of data into a single format the model can understand, and these tools can struggle, especially when there is little training data. Our research introduces a new way to train AI systems to work directly with different data types, like text and images, without needing to squeeze them into the same format first. We developed a method that lets the model handle each type of data in its own way, which helps it learn more naturally. This makes it possible to generate, for example, an image from a caption, or even to generate both together from scratch. We tested our method and found that it works well across different types of data. This could help build more flexible and reliable AI systems that understand and generate information more like humans do, across multiple forms rather than just one.
Link To Code: https://github.com/KevinRojas1499/Diffuse-Everything
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Diffusion Model, Discrete Diffusion, Multimodal, Generative Modeling
Submission Number: 14606