$\textit{VIA}$: Unified Spatiotemporal $\underline{Vi}$deo $\underline{A}$daptation for Global and Local Video Editing

TMLR Paper 4944 Authors

24 May 2025 (modified: 29 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: Video editing serves as a fundamental pillar of digital media, spanning applications in entertainment, education, and professional communication. However, previous methods often overlook the necessity of comprehensively understanding both global and local contexts, leading to inaccurate and inconsistent edits in the spatiotemporal dimension, especially for long videos. In this paper, we introduce $\textit{VIA}$, a unified spatiotemporal $\underline{VI}$deo $\underline{A}$daptation framework for global and local video editing, pushing the limits of consistently editing minute-long videos. First, to ensure local consistency within individual frames, we design \emph{test-time editing adaptation}, which adapts a pre-trained image editing model to improve consistency between potential editing directions and the text instruction, and adapts masked latent variables for precise local control. Furthermore, to maintain global consistency over the video sequence, we introduce \emph{spatiotemporal adaptation}, which recursively \textbf{gathers} consistent attention variables from key frames and strategically applies them across the whole sequence to realize the editing effects. Extensive experiments demonstrate that, compared to baseline methods, our $\textit{VIA}$ approach produces edits that are more faithful to the source videos, more coherent in the spatiotemporal context, and more precise in local control. More importantly, we show that $\textit{VIA}$ can achieve consistent editing of minute-long videos, unlocking the potential for advanced video editing tasks over long video sequences.
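The abstract does not give implementation details, but the core idea of spatiotemporal adaptation, gathering attention variables from key frames and applying them across the sequence, can be illustrated with a toy sketch. Everything below (function names, the nearest-key-frame selection rule, single-head unbatched attention) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain single-head scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def edit_with_keyframe_attention(frames_q, frames_k, frames_v, keyframe_ids):
    """Toy sketch: every frame attends over the K/V of its nearest key
    frame instead of its own, so all frames share a consistent set of
    attended features (hypothetical simplification of the gathering step)."""
    keyframe_ids = np.asarray(keyframe_ids)
    outputs = []
    for t, q in enumerate(frames_q):
        kf = keyframe_ids[np.argmin(np.abs(keyframe_ids - t))]
        outputs.append(attention(q, frames_k[kf], frames_v[kf]))
    return outputs
```

Because non-key frames reuse a key frame's keys and values, frames with similar queries produce similar attention outputs, which is one simple way to encourage the temporal consistency the abstract describes.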
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Liang-Chieh_Chen1
Submission Number: 4944