Abstract: Video Frame Interpolation (VFI) is a challenging task, especially in scenarios involving large motion. Most existing methods rely on optical flow, which is difficult to predict when large motion is present. Moreover, because these methods lack image priors, they tend to produce intermediate frames with artifacts whenever the predicted optical flow is inaccurate. In this paper, we propose a novel method built on a pre-trained latent diffusion model (LDM). First, we freeze most of the parameters to preserve the rich image priors and powerful generative capability of the LDM. Second, we inflate the model to handle video and adopt a multi-scale spatial-temporal attention module to strengthen its ability to handle large motion. Finally, information from the input frames is exploited to help reconstruct details in the output frames, further improving their quality. Experimental results demonstrate that our method performs well on both natural and animated videos with large motion. In particular, it achieves state-of-the-art performance on the animated dataset, producing outputs with almost nearly no artifacts.
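The abstract does not give implementation details, so as a rough illustration of the "multi-scale spatial-temporal attention" idea, here is a minimal PyTorch sketch: attention is computed jointly over frames and pixels at several spatial scales, so that long-range correspondences (large motion) can be matched at coarse resolution while fine resolution preserves detail. All module names, shapes, and scale choices below are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSpatialTemporalAttention(nn.Module):
    """Hypothetical multi-scale spatial-temporal self-attention block.

    Tokens are (frame, pixel) pairs; attending over them jointly at
    multiple spatial scales lets coarse scales capture large motion
    while fine scales retain spatial detail.
    """

    def __init__(self, channels: int, num_heads: int = 4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Fuse the concatenated per-scale outputs back to `channels`.
        self.proj = nn.Linear(channels * len(scales), channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) — a short clip of latent feature maps.
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)
        outs = []
        for s in self.scales:
            # Downsample each frame spatially by factor s (coarse scale).
            xs = F.avg_pool2d(flat, s) if s > 1 else flat
            hs, ws = xs.shape[-2:]
            # Flatten frames and pixels into one sequence so attention
            # spans space and time jointly.
            tokens = (xs.reshape(b, t, c, hs * ws)
                        .permute(0, 1, 3, 2)
                        .reshape(b, t * hs * ws, c))
            tokens = self.norm(tokens)
            attn_out, _ = self.attn(tokens, tokens, tokens)
            # Restore spatial layout, then upsample back to (H, W).
            attn_out = attn_out.reshape(b * t, hs, ws, c).permute(0, 3, 1, 2)
            attn_out = F.interpolate(attn_out, size=(h, w),
                                     mode="bilinear", align_corners=False)
            outs.append(attn_out)
        # Concatenate scales along channels and project back down.
        fused = torch.cat(outs, dim=1).permute(0, 2, 3, 1)   # (B*T, H, W, C*S)
        fused = self.proj(fused).permute(0, 3, 1, 2)          # (B*T, C, H, W)
        return x + fused.reshape(b, t, c, h, w)               # residual
```

Note that full attention over all T·H·W tokens is expensive; a practical implementation would likely restrict attention windows or apply this only at low-resolution stages of the inflated LDM.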