Multimedia generation approaches occupy a prominent place in artificial intelligence research. Text-to-image models have achieved high-quality results in recent years, whereas video synthesis methods have only recently begun to mature. In this paper we present a new two-stage latent diffusion video generation architecture with a new MoVQ-based video decoding scheme. The first stage performs keyframe synthesis, while the second is devoted to interpolated frame generation. During evaluation we compare two temporal conditioning approaches and show that temporal blocks improve over temporal layers in terms of the IS and CLIPSIM metrics, which reflect video generation quality. We also evaluate different configurations of the MoVQ-based video decoding scheme to achieve better PSNR, SSIM, MSE, and LPIPS scores. Finally, we compare our pipeline with existing solutions and achieve a top-3 CLIPSIM score (0.2976).
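The two-stage structure described above (sparse keyframe synthesis followed by dense frame interpolation) can be sketched with a toy NumPy example. This is purely illustrative: both stages in the paper are learned latent diffusion models, whereas here keyframes are random tensors and the interpolation stage is replaced by simple linear blending; all function names are hypothetical.

```python
import numpy as np


def generate_keyframes(num_keyframes: int, height: int, width: int,
                       seed: int = 0) -> np.ndarray:
    """Stand-in for the keyframe synthesis stage.

    The actual pipeline would run a text-conditioned latent diffusion
    model here; we just sample random RGB frames for illustration.
    """
    rng = np.random.default_rng(seed)
    return rng.random((num_keyframes, height, width, 3))


def interpolate_frames(keyframes: np.ndarray,
                       frames_between: int) -> np.ndarray:
    """Stand-in for the interpolation stage.

    Linearly blends each pair of consecutive keyframes; a learned
    interpolation model would replace this blending in practice.
    """
    frames = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        # t = 0 reproduces the left keyframe; the right keyframe of
        # each gap is emitted as the left keyframe of the next gap.
        for t in np.linspace(0.0, 1.0, frames_between + 1, endpoint=False):
            frames.append((1.0 - t) * a + t * b)
    frames.append(keyframes[-1])
    return np.stack(frames)


keyframes = generate_keyframes(num_keyframes=4, height=8, width=8)
video = interpolate_frames(keyframes, frames_between=2)
# 3 gaps x (1 keyframe + 2 interpolated) + final keyframe = 10 frames
print(video.shape)  # (10, 8, 8, 3)
```

Splitting generation this way lets the expensive synthesis model run on only a few keyframes, while a cheaper interpolation stage fills in the remaining frames to reach the target frame rate.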