DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework

ACL ARR 2024 June Submission 4341 Authors

16 Jun 2024 (modified: 09 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Current video generation models excel at creating short, realistic clips but struggle with longer, multi-scene videos. We introduce \texttt{DreamFactory}, an LLM-based framework that addresses this challenge. \texttt{DreamFactory} leverages multi-agent collaboration principles and a Key Frames Iteration Design Method to ensure consistency and style across long videos, and it employs Chain-of-Thought (CoT) reasoning to address uncertainties inherent in large language models. \texttt{DreamFactory} generates long, stylistically coherent, and complex videos. Because evaluating such long-form videos is itself a challenge, we propose novel metrics, including a Cross-Scene Face Distance Score and a Cross-Scene Style Consistency Score. To support further research in this area, we contribute the Multi-Scene Videos Dataset, containing over 150 human-rated videos. \texttt{DreamFactory} paves the way for utilizing multi-agent systems in video generation. We will release our framework and datasets upon paper acceptance.
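The abstract names a Cross-Scene Face Distance Score without giving its formula. A plausible minimal sketch, assuming per-scene face embeddings have already been extracted by some face-recognition encoder (the encoder, the function name, and the exact aggregation are assumptions, not the paper's published definition), is the mean pairwise L2 distance between scene embeddings, where a lower score indicates more consistent character identity across scenes:

```python
import numpy as np

def cross_scene_face_distance(scene_embeddings):
    """Mean pairwise L2 distance between per-scene face embeddings.

    scene_embeddings: list of equal-length vectors, one per scene.
    Lower values suggest the same character identity is preserved
    across scenes. Hypothetical reconstruction, not the paper's
    exact metric.
    """
    scenes = [np.asarray(e, dtype=float) for e in scene_embeddings]
    dists = [
        np.linalg.norm(scenes[i] - scenes[j])
        for i in range(len(scenes))
        for j in range(i + 1, len(scenes))
    ]
    # With fewer than two scenes there are no pairs; report 0.0.
    return float(np.mean(dists)) if dists else 0.0
```

A Cross-Scene Style Consistency Score could follow the same pattern with style features (e.g., Gram-matrix statistics) in place of face embeddings.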
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: multimodal applications
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Data resources, Position papers
Languages Studied: English
Submission Number: 4341