Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy

Published: 25 Sept 2024, Last Modified: 06 Nov 2024, NeurIPS 2024 poster, CC BY-NC 4.0
Keywords: Adaptive Diffusion, Diffusion Probabilistic Models, Third-order Difference
TL;DR: We explore training-free diffusion acceleration that dynamically selects the denoising path according to the given prompt, and design a third-order estimator to indicate computation redundancy.
Abstract: Diffusion models have recently achieved great success in the synthesis of high-quality images and videos. However, the existing denoising techniques in diffusion models are commonly based on step-by-step noise predictions, which suffer from high computation costs, resulting in prohibitive latency for interactive applications. In this paper, we propose AdaptiveDiffusion to relieve this bottleneck by adaptively reducing the number of noise prediction steps during the denoising process. Our method skips as many noise prediction steps as possible while keeping the final denoised results identical to the original full-step ones. Specifically, the skipping strategy is guided by the third-order latent difference, which indicates the stability between timesteps during the denoising process and thus enables the reuse of previous noise prediction results. Extensive experiments on image and video diffusion models demonstrate that our method significantly speeds up the denoising process while generating results identical to the original process, achieving up to an average 2-5x speedup without quality degradation. The code is available at https://github.com/UniModal4Reasoning/AdaptiveDiffusion
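To make the skipping strategy described in the abstract concrete, the sketch below shows one plausible way a third-order latent-difference criterion could gate noise-prediction reuse inside a sampling loop. This is an illustrative assumption based only on the abstract, not the released implementation; the function name should_skip_step, the history buffer, and the threshold value are hypothetical.

```python
import torch

def should_skip_step(latent_history, threshold=0.01):
    """Decide whether the cached noise prediction can be reused for this step.

    latent_history: the four most recent denoised latents [x_{t-3}, x_{t-2}, x_{t-1}, x_t]
        (hypothetical ordering). A small third-order difference suggests the
        denoising trajectory is locally smooth, so the previous noise estimate
        is likely still accurate and the model call can be skipped.
    """
    if len(latent_history) < 4:
        return False  # not enough history to form a third-order difference

    x0, x1, x2, x3 = latent_history[-4:]
    third_diff = x3 - 3 * x2 + 3 * x1 - x0   # third-order finite difference
    first_diff = x3 - x2                     # first-order change, used for normalization
    ratio = third_diff.abs().mean() / (first_diff.abs().mean() + 1e-8)
    return ratio.item() < threshold
```

In a sampling loop, the criterion would be checked before each denoiser call: if it returns True, the previously cached noise prediction is fed to the scheduler update; otherwise the noise-prediction network is run as usual and the cache is refreshed.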
Supplementary Material: zip
Primary Area: Diffusion based models
Submission Number: 230