Does Semantic Noise Initialization Transfer from Images to Videos? A Paired Diagnostic Study

Published: 02 Mar 2026, Last Modified: 03 Mar 2026 · ICLR 2026 Workshop MM Intelligence Poster · CC BY 4.0
Track: tiny paper (up to 4 pages)
Keywords: Text-to-Video Generation; Diffusion Models; Noise Initialization
TL;DR: Paired evaluation on VideoCrafter reveals that semantic noise initialization yields statistically insignificant improvements (p ≈ 0.17), highlighting the necessity of rigorous testing in noise-space studies.
Abstract: Semantic noise initialization has been reported to improve robustness and controllability in image diffusion models. Whether these gains transfer to text-to-video (T2V) generation remains unclear, since temporal coupling can introduce extra degrees of freedom and instability. We benchmark semantic noise initialization against standard Gaussian noise using a frozen VideoCrafter-style T2V diffusion backbone and VBench on 100 prompts. Using prompt-level paired tests with bootstrap confidence intervals and a sign-flip permutation test, we observe a small positive trend on temporal-related dimensions; however, the 95\% confidence interval includes zero ($p \approx 0.17$) and the overall score remains on par with the baseline. To understand this outcome, we analyze the induced perturbations in noise space and find patterns consistent with a weak or unstable signal. We recommend prompt-level paired evaluation and noise-space diagnostics as standard practice when studying initialization schemes for T2V diffusion. Our code and evaluation scripts are available at: https://anonymous.4open.science/r/golden-noise-transfer-8CB7/
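The paired protocol described above (per-prompt score differences, a percentile bootstrap confidence interval, and a two-sided sign-flip permutation test) can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code; the function names and the choice of 10,000 resamples are assumptions for illustration.

```python
import numpy as np

def sign_flip_pvalue(diffs, n_perm=10_000, seed=0):
    """Two-sided sign-flip permutation test on paired per-prompt
    score differences (treatment minus baseline). Under the null,
    each difference is symmetric around zero, so its sign is random."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    observed = diffs.mean()
    # Randomly flip the sign of each paired difference n_perm times.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    perm_means = (signs * diffs).mean(axis=1)
    return float((np.abs(perm_means) >= abs(observed)).mean())

def bootstrap_ci(diffs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean paired difference:
    resample prompts with replacement and take the empirical quantiles."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    idx = rng.integers(0, diffs.size, size=(n_boot, diffs.size))
    boot_means = diffs[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

A result like the one reported corresponds to a small positive mean difference whose bootstrap interval still straddles zero and whose sign-flip p-value exceeds 0.05.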
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 13