A Good Image Generator Is What You Need for High-Resolution Video Synthesis

Published: 12 Jan 2021, Last Modified: 22 Oct 2023, ICLR 2021 Spotlight
Keywords: high-resolution video generation, contrastive learning, cross-domain video generation
Abstract: Image and video synthesis are closely related areas that aim to generate content from noise. While rapid progress has been made in improving image-based models to handle large resolutions, high-quality renderings, and wide variations in image content, achieving comparable video generation results remains problematic. We present a framework that leverages contemporary image generators to render high-resolution videos. We frame the video synthesis problem as discovering a trajectory in the latent space of a pre-trained and fixed image generator. Not only does such a framework render high-resolution videos, but it is also an order of magnitude more computationally efficient. We introduce a motion generator that discovers the desired trajectory, in which content and motion are disentangled. With such a representation, our framework allows for a broad range of applications, including content and motion manipulation. Furthermore, we introduce a new task, which we call cross-domain video synthesis, in which the image and motion generators are trained on disjoint datasets belonging to different domains. This allows for generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our method over existing video generation techniques. Code will be released at https://github.com/snap-research/MoCoGAN-HD.
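The core idea in the abstract, that a video is a trajectory in the latent space of a frozen image generator, produced by a separate motion generator, can be sketched as follows. This is a minimal illustrative toy, not the official MoCoGAN-HD implementation: the tiny latent dimension, the random-matrix stand-in for a pre-trained generator, and the residual-step motion model are all assumptions made for brevity (real StyleGAN2 latents are 512-dimensional, and the paper's motion generator is a trained recurrent network).

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8     # illustrative; real StyleGAN2 latents are 512-d
NUM_FRAMES = 4

# Stand-in for a pre-trained, *frozen* image generator G: z -> frame.
# Its weights are fixed; only the motion model would be trained.
W_frozen = rng.normal(size=(LATENT_DIM, 16))

def image_generator(z):
    """Frozen G mapping latent vectors to flat 'frames' (toy stand-in)."""
    return np.tanh(z @ W_frozen)

# Toy motion generator: predicts residual steps in latent space, so the
# content code z0 and the motion (the sequence of deltas) stay disentangled.
W_motion = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1

def motion_generator(z0, num_frames):
    """Unroll a latent trajectory starting from content code z0."""
    trajectory = [z0]
    z = z0
    for _ in range(num_frames - 1):
        z = z + np.tanh(z @ W_motion)  # small residual motion step
        trajectory.append(z)
    return np.stack(trajectory)       # (num_frames, LATENT_DIM)

z0 = rng.normal(size=LATENT_DIM)        # content code (one per video)
traj = motion_generator(z0, NUM_FRAMES) # latent trajectory
video = image_generator(traj)           # (NUM_FRAMES, 16) stack of frames
```

Because the image generator stays fixed, rendering each frame costs one forward pass of an existing high-resolution model, which is where the claimed efficiency over training a full video GAN from scratch comes from.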
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
One-sentence Summary: Reuse a pre-trained image generator for high-resolution video synthesis
Code: [snap-research/MoCoGAN-HD](https://github.com/snap-research/MoCoGAN-HD)
Data: [FFHQ](https://paperswithcode.com/dataset/ffhq), [FaceForensics](https://paperswithcode.com/dataset/faceforensics), [UCF101](https://paperswithcode.com/dataset/ucf101), [VoxCeleb1](https://paperswithcode.com/dataset/voxceleb1)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2104.15069/code)