Keywords: Video Motion Transfer; Efficiency; Diffusion Model
TL;DR: We identify the motion redundancy and gradient redundancy present in training-free video motion transfer and propose FastVMT, a framework that transfers motion efficiently with a DiT-based video generative model.
Abstract: Video motion transfer aims to synthesize videos by generating visual content according to a text prompt while transferring the motion pattern observed in a reference video. Recent methods predominantly use the Diffusion Transformer (DiT) architecture. To achieve satisfactory runtime, several methods attempt to accelerate the computations in the DiT, but fail to address structural sources of inefficiency. In this work, we identify and remove two types of computational redundancy in earlier work: **motion redundancy** arises because the generic DiT architecture does not exploit the fact that frame-to-frame motion is small and smooth; **gradient redundancy** occurs when gradients that change slowly along the diffusion trajectory are nevertheless recomputed at every step. To mitigate motion redundancy, we restrict the corresponding attention layers to a local neighborhood so that interaction weights are not computed for unnecessarily distant image regions. To exploit gradient redundancy, we design an optimization scheme that reuses gradients from previous diffusion steps and skips unwarranted gradient computations. On average, FastVMT achieves a 3.43× speedup without degrading the visual fidelity or temporal consistency of the generated videos.
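The abstract describes two mechanisms: restricting attention to a local neighborhood and reusing gradients across diffusion steps. Below is a minimal, hypothetical PyTorch sketch of these two ideas; the helper names (`local_attention_mask`, `masked_attention`, `optimize_with_gradient_reuse`) and parameters (`window`, `reuse_every`) are illustrative assumptions, not the FastVMT implementation, which is not reproduced here.

```python
# Hypothetical sketch: (1) a local attention mask so that weights for
# distant token pairs are never computed as contributions, and (2) a toy
# optimization loop that reuses a cached gradient for several diffusion
# steps before recomputing it. Names and parameters are illustrative only.
import torch


def local_attention_mask(num_tokens: int, window: int) -> torch.Tensor:
    """Boolean mask that is True only for token pairs within `window`
    positions of each other (a 1-D stand-in for a spatial neighborhood)."""
    idx = torch.arange(num_tokens)
    dist = (idx[:, None] - idx[None, :]).abs()
    return dist <= window


def masked_attention(q, k, v, mask):
    """Scaled dot-product attention where out-of-neighborhood scores are
    set to -inf before the softmax, so they contribute zero weight."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v


def optimize_with_gradient_reuse(latent, loss_fn, steps=50, reuse_every=4, lr=0.1):
    """Toy loop: recompute the gradient only every `reuse_every` steps and
    reuse the cached gradient in between, skipping backward passes."""
    cached_grad = None
    for t in range(steps):
        if cached_grad is None or t % reuse_every == 0:
            latent = latent.detach().requires_grad_(True)
            loss = loss_fn(latent)
            cached_grad = torch.autograd.grad(loss, latent)[0]
        latent = (latent - lr * cached_grad).detach()
    return latent
```

For example, `masked_attention(q, k, v, local_attention_mask(q.shape[0], window=4))` limits interactions to a small neighborhood, while `optimize_with_gradient_reuse` shows the general pattern of skipping backward passes when the gradient is assumed to change slowly.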
Supplementary Material: zip
Primary Area: generative models
Submission Number: 601