Keywords: Generative model; Video diffusion model; Few-step generation
Abstract: Temporally consistent video-to-video generation is critical for applications such as style transfer and upsampling. In this paper, we provide a theoretical analysis of warped noise—a recently proposed technique for training video diffusion models—and show that pairing it with the standard denoising objective implicitly trains models to be equivariant to spatial transformations of the input noise. We term such models EquiVDM. This equivariance enables motion in the input noise to align naturally with motion in the generated video, yielding coherent, high-fidelity outputs without the need for specialized modules or auxiliary losses. A further advantage is sampling efficiency: EquiVDM achieves comparable or superior quality in far fewer sampling steps. When distilled into one-step student models, EquiVDM preserves equivariance and delivers stronger motion controllability and fidelity than distilled non-equivariant baselines. Across benchmarks, EquiVDM consistently outperforms prior methods in motion alignment, temporal consistency, and perceptual quality, while substantially lowering sampling cost.
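The equivariance property the abstract describes — that applying a spatial transformation to the input noise transforms the denoised output the same way — can be illustrated with a toy example. The sketch below is not the paper's model: it uses a hand-built shift-invariant smoothing operator as a stand-in for a learned denoiser, and a circular shift as the spatial transform (standing in for warped-noise motion). For such an operator, denoiser(T(noise)) equals T(denoiser(noise)) exactly.

```python
import numpy as np

# Toy illustration of equivariance (not EquiVDM itself): a shift-invariant
# operator commutes with circular shifts, so warping the noise first or
# warping the denoised output gives the same result.

rng = np.random.default_rng(0)
noise = rng.standard_normal((16, 16))  # toy single-frame noise field

# Simple 3x3 smoothing kernel acting as a stand-in "denoiser".
kernel = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])

def toy_denoiser(x):
    """Convolve with the smoothing kernel using circular padding."""
    out = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += kernel[di + 1, dj + 1] * np.roll(x, (di, dj), axis=(0, 1))
    return out

def spatial_transform(x, dy=3, dx=5):
    """The transform T: a circular shift, mimicking motion in warped noise."""
    return np.roll(x, (dy, dx), axis=(0, 1))

lhs = toy_denoiser(spatial_transform(noise))   # denoise the warped noise
rhs = spatial_transform(toy_denoiser(noise))   # warp the denoised output
print(np.allclose(lhs, rhs))                   # → True (equivariance holds)
```

A learned video diffusion model is of course not exactly shift-invariant; the paper's claim is that training with warped noise and the standard denoising objective pushes the model toward this commuting behavior, so motion encoded in the noise carries through to the generated frames.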
Supplementary Material: zip
Primary Area: generative models
Submission Number: 23547