Keywords: Long Video Generation; Video Dataset Captioning
Abstract: Generating long videos that depict complex stories, such as movie scenes rendered from scripts, holds great promise and offers far more than short clips. However, current methods that combine autoregression with diffusion models often struggle because their step-by-step generation inherently leads to severe error accumulation (drift). Moreover, most existing long-video approaches target single, continuous scenes, limiting their usefulness for narratives with many events and scene changes. This paper introduces a new approach to address these problems. First, we propose a novel \textbf{frame-level} dataset annotation scheme that provides the detailed text guidance required for complex, multi-scene long videos. This fine-grained guidance is paired with a \textbf{Frame-Level Attention Mechanism} to ensure close alignment between text and video. For inference, we develop \textbf{Parallel Multi-Window Denoising (PMWD)}, which treats a long video as multiple overlapping windows that are denoised in parallel; predictions in the overlapping regions are averaged, allowing information to flow in both directions and greatly reducing error accumulation. A key feature is that each frame within these windows can be guided by its own distinct text prompt. Training uses \textbf{Diffusion Forcing} to give the model the temporal flexibility these generation strategies require. We evaluate our approach on the challenging VBench 2.0 benchmarks ("Complex Plots" and "Complex Landscapes") using the WanX2.1-T2V-1.3B model. The results show that our method follows instructions more faithfully in complex, changing scenes and produces high-quality long videos. We plan to release our dataset annotation methods and trained models to the research community.
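The abstract describes PMWD only at a high level. The following is a minimal sketch of one denoising step over overlapping windows with per-frame prompts and overlap averaging, as described above; the function and parameter names (`denoise_fn`, `window_size`, `stride`) are hypothetical and do not reflect the authors' actual implementation.

```python
import torch

def pmwd_denoise_step(latents, denoise_fn, prompts, window_size, stride):
    """One PMWD-style denoising step (illustrative sketch).

    latents:    tensor of shape (T, C, H, W), noisy latents for T frames
    denoise_fn: hypothetical model call that denoises one window given
                per-frame text prompts
    prompts:    list of T per-frame text prompts
    """
    T = latents.shape[0]
    accum = torch.zeros_like(latents)                          # summed window outputs
    counts = torch.zeros(T, 1, 1, 1, device=latents.device)    # how many windows cover each frame

    # Split the long sequence into overlapping windows.
    starts = list(range(0, max(T - window_size, 0) + 1, stride))
    if starts[-1] + window_size < T:
        starts.append(T - window_size)

    # The paper processes windows in parallel; a loop is used here
    # purely for readability.
    for s in starts:
        e = s + window_size
        out = denoise_fn(latents[s:e], prompts[s:e])  # per-frame text guidance
        accum[s:e] += out
        counts[s:e] += 1

    # Average predictions wherever windows overlap, letting information
    # flow both forward and backward across window boundaries.
    return accum / counts
```

Averaging in the overlap regions is what distinguishes this scheme from purely autoregressive extension, since later windows can also influence earlier frames within the same denoising step.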
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 10134