Sparse Video-Gen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a sparse attention method to accelerate the inference of video diffusion models.
Abstract: Diffusion Transformers (DiTs) dominate video generation, but their high computational cost severely limits real-world applicability, often requiring tens of minutes to generate a few seconds of video even on high-performance GPUs. This inefficiency primarily arises from the quadratic computational complexity of 3D full attention with respect to the context length. In this paper, we propose a training-free framework termed Sparse VideoGen (SVG) that leverages the inherent sparsity in 3D full attention to boost inference efficiency. We reveal that attention heads can be dynamically classified into two groups according to their distinct sparse patterns: (1) Spatial Heads, where only spatially-related tokens within each frame dominate the attention output, and (2) Temporal Heads, where only temporally-related tokens across different frames dominate. Based on this insight, SVG introduces an online profiling strategy to capture the dynamic sparse patterns and predict the type of each attention head. Combined with a novel hardware-efficient tensor layout transformation and customized kernel implementations, SVG achieves up to 2.28$\times$ and 2.33$\times$ end-to-end speedup on CogVideoX-v1.5 and HunyuanVideo, respectively, while preserving generation quality. Our code will be open-sourced upon publication.
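To make the online profiling idea concrete, here is a minimal NumPy sketch of how such a head classifier could work. All function names and the error-based decision rule are illustrative assumptions (the paper's actual profiling strategy and kernels are not reproduced here): a few query rows are sampled, attention restricted to a spatial (same-frame) mask and a temporal (same-position) mask is compared against the dense output, and the head is labeled with whichever mask approximates it better.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attn(q, k, v, mask=None):
    # Scaled dot-product attention for a single head.
    s = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        s = np.where(mask, s, -1e9)  # suppress masked-out tokens
    return softmax(s) @ v

def classify_head(q, k, v, frames, tokens_per_frame, n_samples=8, seed=0):
    """Hypothetical profiling sketch: sample a few query rows, compare
    spatial vs. temporal masked attention against the dense output, and
    label the head with whichever sparse pattern reproduces it better."""
    n = frames * tokens_per_frame
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=n_samples, replace=False)

    frame_id = np.arange(n) // tokens_per_frame  # frame index of each token
    pos_id = np.arange(n) % tokens_per_frame     # spatial position within frame

    dense = attn(q[idx], k, v)  # reference output for the sampled rows

    # Spatial pattern: attend only to tokens in the same frame.
    spatial_mask = frame_id[idx][:, None] == frame_id[None, :]
    # Temporal pattern: attend only to the same position across frames.
    temporal_mask = pos_id[idx][:, None] == pos_id[None, :]

    err_s = np.mean((attn(q[idx], k, v, spatial_mask) - dense) ** 2)
    err_t = np.mean((attn(q[idx], k, v, temporal_mask) - dense) ** 2)
    return "spatial" if err_s < err_t else "temporal"
```

In the actual system this decision would be made per head at inference time, and the chosen sparse pattern would then drive a masked attention kernel over the remaining queries rather than this dense reference computation.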
Lay Summary: AI models can generate high-quality videos, but they’re extremely slow, often taking minutes to produce just a few seconds. We introduce Sparse VideoGen (SVG), a method that speeds up video generation without changing the model or lowering quality. SVG detects when parts of the model focus only on spatial relationships or only on temporal relationships, and skips unnecessary computation. With better data handling and hardware usage, SVG makes leading models 2 times faster while keeping the same video quality.
Link To Code: https://github.com/svg-project/Sparse-VideoGen
Primary Area: Deep Learning->Attention Mechanisms
Keywords: Sparse Attention, Video Diffusion Transformer, Efficient Inference
Submission Number: 9238