TL;DR: We propose a bridge-based data-to-data generation process for image-to-video (I2V) synthesis, and present two techniques that improve bridge-based I2V models, achieving higher synthesis quality than previous diffusion-based approaches.
Abstract: Diffusion models have achieved remarkable progress on image-to-video (I2V) generation, but their noise-to-data generation process is inherently mismatched with this task, which may lead to suboptimal synthesis quality. In this work, we present FrameBridge. By modeling frame-to-frames generation with a data-to-data generative process based on bridge models, we fully exploit the information contained in the given image and improve the consistency between the generation process and the I2V task.
Moreover, we propose two novel techniques, one for each of the two common settings for training I2V models. First, we propose SNR-Aligned Fine-tuning (SAF), making the first attempt to fine-tune a diffusion model into a bridge model and thereby allowing us to leverage pre-trained diffusion-based text-to-video (T2V) models. Second, we propose a neural prior, which further improves the synthesis quality of FrameBridge when training from scratch. Experiments conducted on WebVid-2M and UCF-101 demonstrate the superior quality of FrameBridge compared with its diffusion counterpart (zero-shot FVD 95 vs. 192 on MSR-VTT and non-zero-shot FVD 122 vs. 171 on UCF-101), and the advantages of our proposed SAF and neural prior for bridge-based I2V models. Project page: https://framebridge-icml.github.io/
Lay Summary: We propose FrameBridge, an image-to-video (I2V) model that can generate videos whose content is consistent with a given image. Unlike previous diffusion-based methods, which start the generation process from uninformative Gaussian noise, FrameBridge is a bridge-based model whose sampling process can start from the given image, which already provides structural prior information. Our model can either be fine-tuned from a pre-trained video diffusion model to save computational resources or trained from scratch, and we propose two techniques (SNR-Aligned Fine-tuning and neural prior) to further enhance performance in these two scenarios.
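To make the contrast with diffusion concrete, the following is a minimal illustrative sketch of a generic Brownian-bridge forward process whose endpoint is the given image repeated across frames, rather than Gaussian noise. This is only a toy example of the general bridge idea; the function `bridge_marginal_sample` and the unit-variance bridge used here are assumptions for illustration, not FrameBridge's exact formulation.

```python
# Illustrative sketch only: a generic Brownian-bridge marginal, not
# FrameBridge's actual parameterization. The "prior" endpoint is the
# given image broadcast over frames; a diffusion model would instead
# start sampling from pure Gaussian noise.
import numpy as np

def bridge_marginal_sample(x0, prior, t, rng):
    """Sample x_t on a Brownian bridge between data x0 (t=0) and prior (t=1).

    Marginal: x_t ~ N((1 - t) * x0 + t * prior, t * (1 - t) * I).
    """
    mean = (1.0 - t) * x0 + t * prior
    std = np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
frames, h, w = 4, 8, 8
video = rng.standard_normal((frames, h, w))   # target video clip (x0)
image = rng.standard_normal((1, h, w))        # the given first frame
prior = np.broadcast_to(image, video.shape)   # frame-to-frames prior

x_mid = bridge_marginal_sample(video, prior, 0.5, rng)  # noisy interpolation
x_end = bridge_marginal_sample(video, prior, 1.0, rng)  # collapses to prior
```

At t=1 the variance t(1-t) vanishes, so the process starts exactly at the informative image prior; a trained bridge model then transports samples from this prior back to t=0 (the video), whereas diffusion must transport from pure noise.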
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Image-to-Video Generation, Diffusion Models, Diffusion Bridge Models, Prior Distribution, Data-to-Data Generation
Submission Number: 5342