Diverse Video Generation using a Gaussian Process Trigger

Published: 12 Jan 2021, Last Modified: 05 May 2023
ICLR 2021 Poster
Keywords: video synthesis, future frame generation, video generation, gaussian process priors, diverse video generation
Abstract: Generating future frames given a few context (or past) frames is a challenging task. It requires modeling the temporal coherence of videos as well as multi-modality in terms of diversity in the potential future states. Current variational approaches for video generation tend to marginalize over multi-modal future outcomes. Instead, we propose to explicitly model the multi-modality in the future outcomes and leverage it to sample diverse futures. Our approach, Diverse Video Generator, uses a Gaussian Process (GP) to learn priors on future states given the past and maintains a probability distribution over possible futures given a particular sample. We leverage the changes in this distribution over time to control the sampling of diverse future states by estimating the end of ongoing sequences. In particular, we use the variance of the GP over the output function space to trigger a change in the action sequence. We achieve state-of-the-art results on diverse future frame generation in terms of reconstruction quality and diversity of the generated sequences.
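To make the trigger mechanism concrete, below is a minimal, hypothetical sketch of the core idea: fit a GP to a sequence of past states and use its predictive variance to decide when to sample a new future rather than follow the posterior mean. This is not the authors' implementation (see the linked repository for that); the 1-D toy signal, the kernel choice, and the names `latents` and `VAR_THRESHOLD` are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): using the predictive variance of a
# Gaussian Process over latent future states as a trigger for sampling a diverse
# continuation. A 1-D toy signal stands in for learned video latents.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stand-in for latent states extracted from the context (past) frames.
t_past = np.arange(10, dtype=float).reshape(-1, 1)
latents = np.sin(0.5 * t_past).ravel() + 0.05 * rng.standard_normal(10)

# GP prior on future states given the past (smooth kernel + observation noise).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.01))
gp.fit(t_past, latents)

VAR_THRESHOLD = 0.2  # hypothetical trigger level on the predictive std

for t in range(10, 16):
    mean, std = gp.predict(np.array([[float(t)]]), return_std=True)
    if std[0] > VAR_THRESHOLD:
        # High variance over the output function space: the GP is uncertain
        # about the continuation, so trigger a change by sampling a future
        # state from the posterior instead of taking its mean.
        next_latent = rng.normal(mean[0], std[0])
        print(f"t={t}: std={std[0]:.3f} -> trigger, sampled {next_latent:.3f}")
    else:
        next_latent = mean[0]
        print(f"t={t}: std={std[0]:.3f} -> continue with mean {next_latent:.3f}")
```

As the GP extrapolates further from the observed past, its predictive variance grows, which is what makes it usable as an end-of-sequence signal for switching to a new action mode.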
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: Diverse future frame synthesis by modeling the diversity of future states using a Gaussian Process, and using Bayesian inference to sample diverse future states.
Supplementary Material: zip
Code: [shgaurav1/DVG](https://github.com/shgaurav1/DVG)
Data: [BAIR Robot Pushing](https://paperswithcode.com/dataset/bair-robot-pushing), [Human3.6M](https://paperswithcode.com/dataset/human3-6m), [KTH](https://paperswithcode.com/dataset/kth)