Reviewed Version (pdf): https://openreview.net/references/pdf?id=d4lomlk_Y0
Keywords: Generative Models, Video Generation, Video Forecasting, Autoregressive Models, VQ-VAE, Computer Vision
Abstract: In recent years, the task of video prediction---forecasting future video frames given past ones---has attracted growing attention in the research community. In this paper we propose a novel approach to this problem based on the Vector Quantized Variational AutoEncoder (VQ-VAE). With VQ-VAE we compress high-resolution videos into a hierarchical set of multi-scale discrete latent variables. Compared to pixels, this compressed latent space has dramatically reduced dimensionality, allowing us to apply scalable autoregressive generative models to predict video. In contrast to previous work, which has largely emphasized highly constrained datasets, we focus on very diverse, large-scale datasets such as Kinetics-600. To our knowledge, we predict video at a higher resolution, 256$\times$256, than any previous method. We further validate our approach against prior work via a crowdsourced human evaluation.
One-sentence Summary: We propose a two-stage model based on VQ-VAE that forecasts video on the Kinetics dataset at a higher resolution than any prior method.
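To make the first stage of the abstract concrete, here is a minimal NumPy sketch of the nearest-neighbour lookup at the core of VQ-VAE quantization: continuous latents are mapped to indices of a learned codebook, giving the discrete latent space over which an autoregressive prior can be trained. This is an illustrative sketch, not the paper's implementation; the function names, shapes, and toy data are all assumptions.

```python
import numpy as np

def vq_encode(features, codebook):
    """Map each feature vector to the index of its nearest codebook entry.

    features: (N, D) array of continuous encoder outputs.
    codebook: (K, D) array of learned embeddings.
    Returns an (N,) integer array -- the discrete latent code.
    """
    # Squared Euclidean distance between every feature and every code,
    # computed via broadcasting: result has shape (N, K).
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    """Recover the quantized vectors from discrete indices."""
    return codebook[indices]

# Toy example: 4 feature vectors near 3 codebook entries (hypothetical data).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 2))
features = codebook[[0, 2, 1, 0]] + 0.01 * rng.normal(size=(4, 2))
idx = vq_encode(features, codebook)
recon = vq_decode(idx, codebook)
```

In a full model the codebook is learned jointly with the encoder/decoder, and an autoregressive model is then fit over sequences of these indices rather than over raw pixels, which is what makes 256x256 video tractable.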
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics