Diffusion Models without Attention

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Diffusion models, State space models, Image Generation, Long-range Architecture
Abstract: Advances in high-fidelity image generation have been spearheaded by denoising diffusion probabilistic models (DDPMs). However, considerable computational challenges remain when scaling current DDPM architectures to high resolutions, due to the attention layers used in both UNet architectures and Transformer variants. To keep models tractable, it is common to employ lossy compression techniques in the hidden space, such as patchification, which trade off representational capacity for efficiency. We propose the Diffusion State Space Model (DiffuSSM), an architecture that replaces attention with a more efficient state space model backbone. The model avoids global compression, enabling longer, more fine-grained image representations throughout the diffusion process. Comprehensive validation on ImageNet indicates superior performance in terms of FID and Inception Score at reduced total FLOP usage compared to previous diffusion models using attention.
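
The abstract's core idea, swapping the quadratic-cost attention sublayer for a linear-cost state space layer that runs over the full, non-patchified token sequence, can be illustrated with a toy sketch. The code below is not the authors' DiffuSSM implementation: `SimpleSSM`, `DiffuSSMBlock`, and all hyperparameters are hypothetical placeholders, and the naive Python-loop recurrence stands in for the FFT- or parallel-scan kernels a production SSM would use.

```python
# Minimal sketch (assumed names, not the paper's code): a diffusion block whose
# self-attention sublayer is replaced by a bidirectional diagonal state space layer.
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy diagonal SSM: h_t = a * h_{t-1} + b * x_t, y_t = sum(c * h_t)."""
    def __init__(self, dim, state_dim=16):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(dim, state_dim) * 0.1 - 1.0)
        self.b = nn.Parameter(torch.randn(dim, state_dim) * 0.1)
        self.c = nn.Parameter(torch.randn(dim, state_dim) * 0.1)

    def scan(self, x):                       # x: (batch, length, dim)
        a = torch.sigmoid(self.log_a)        # keep |a| < 1 so the recurrence is stable
        h = x.new_zeros(x.shape[0], x.shape[2], self.b.shape[1])
        ys = []
        for t in range(x.shape[1]):          # O(L) recurrence; real SSMs use parallel scans
            h = a * h + self.b * x[:, t].unsqueeze(-1)
            ys.append((h * self.c).sum(-1))
        return torch.stack(ys, dim=1)

    def forward(self, x):
        # Image tokens have no causal order, so scan in both directions and sum.
        return self.scan(x) + self.scan(x.flip(1)).flip(1)

class DiffuSSMBlock(nn.Module):
    """Transformer-style pre-norm block with the attention sublayer swapped for an SSM."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ssm = SimpleSSM(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                    # x: flattened image tokens, no patchification
        x = x + self.ssm(self.norm1(x))      # O(L) sequence mixing instead of O(L^2) attention
        return x + self.mlp(self.norm2(x))

# Usage: a 32x32 feature map flattened to 1024 tokens passes through the block
# without any patch-based downsampling of the sequence.
block = DiffuSSMBlock(dim=64)
out = block(torch.randn(2, 32 * 32, 64))     # -> (2, 1024, 64)
```

Because the sequence-mixing cost grows linearly with token count, such a block can process the uncompressed hidden representation directly, which is what lets the architecture drop patchification rather than trading resolution for tractability.
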
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4506