Communication-Efficient Diffusion Denoising Parallelization via Reuse-then-Predict Mechanism

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Diffusion Models, Parallelism, Efficient Communication
TL;DR: We propose ParaStep, a step-wise parallelization method based on a reuse-then-predict mechanism that parallelizes diffusion inference by exploiting adjacent-step similarity, reducing latency with minimal quality loss.
Abstract: Diffusion models have emerged as a powerful class of generative models across various modalities, including image, video, and audio synthesis. However, their deployment is often limited by significant inference latency, primarily due to the inherently sequential nature of the denoising process. While existing parallelization strategies attempt to accelerate inference by distributing computation across multiple devices, they typically incur high communication overhead, hindering deployment on commercial hardware. To address this challenge, we propose $\textbf{ParaStep}$, a novel parallelization method based on a reuse-then-predict mechanism that parallelizes diffusion inference by exploiting similarity between adjacent denoising steps. Unlike prior approaches that rely on layer-wise or stage-wise communication, ParaStep employs lightweight, step-wise communication, substantially reducing overhead. ParaStep achieves end-to-end speedups of up to $\textbf{3.88}\times$ on SVD, $\textbf{2.43}\times$ on CogVideoX-2b, and $\textbf{6.56}\times$ on AudioLDM2-large, while maintaining generation quality. These results highlight ParaStep as a scalable and communication-efficient solution for accelerating diffusion inference, particularly in bandwidth-constrained environments.
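To make the reuse-then-predict idea concrete, below is a minimal single-process sketch of how adjacent-step similarity can let a denoising step reuse the previous step's noise estimate instead of recomputing it, which is what would allow those steps to be offloaded with only a small per-step message. All names here (`toy_denoiser`, `reuse_then_predict`, the alternating reuse schedule) are hypothetical illustrations under our own assumptions, not the actual ParaStep implementation described in the paper.

```python
# Toy sketch: exploit adjacent-step similarity by reusing the previous step's
# noise estimate on some steps ("reuse"), recomputing it on others ("predict").
# This is an illustrative assumption-based sketch, not the ParaStep algorithm.
import numpy as np

def toy_denoiser(x, t):
    # Stand-in for an expensive diffusion denoiser; returns a noise estimate.
    return 0.1 * x * np.cos(t)

def sequential_denoise(x, timesteps):
    # Baseline: every step runs the full denoiser in strict sequence.
    for t in timesteps:
        x = x - toy_denoiser(x, t)
    return x

def reuse_then_predict(x, timesteps):
    # Alternate between reusing the adjacent step's estimate and running the
    # real denoiser; the reused steps are the ones that could be delegated to
    # another device with only lightweight step-wise communication.
    prev_eps = None
    for i, t in enumerate(timesteps):
        if prev_eps is not None and i % 2 == 1:
            eps = prev_eps            # "reuse": approximate from the adjacent step
        else:
            eps = toy_denoiser(x, t)  # "predict": run the real denoiser
        x = x - eps
        prev_eps = eps
    return x

if __name__ == "__main__":
    x0 = np.random.default_rng(0).standard_normal(4)
    ts = np.linspace(1.0, 0.0, 10)
    print("sequential:        ", sequential_denoise(x0.copy(), ts))
    print("reuse-then-predict:", reuse_then_predict(x0.copy(), ts))
```

In this toy setting the two trajectories stay close because the denoiser output changes slowly between adjacent steps; the paper's contribution is turning that observation into a multi-device parallelization with step-wise rather than layer-wise or stage-wise communication.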
Primary Area: Infrastructure (e.g., libraries, improved implementation and scalability, distributed solutions)
Submission Number: 6276