LMVP: Video Predictor with Leaked Motion Information

Dong Wang, Yitong Li, Wei Cao, Qi Wei, Liqun Chen, Lawrence Carin

Sep 30, 2018, NIPS 2018 Workshop Spatiotemporal
  • Keywords: video prediction, spatio-temporal modeling
  • Abstract: We propose a Leaked Motion Video Predictor (LMVP) to predict future frames by capturing the spatial and temporal dependencies in the given inputs. Motion is modeled by a newly proposed component, the motion guider, which plays the role of both learner and teacher: it learns temporal features from real data and guides the generator in predicting future frames. Spatial consistency in video is modeled by an adaptive filtering network. To further ensure the spatio-temporal consistency of the prediction, a discriminator is adopted to distinguish real from generated frames. Moreover, the discriminator leaks information to the motion guider and the generator to aid the learning of motion. The proposed LMVP can effectively learn the static and temporal features in videos without the need for human labeling. Experiments on synthetic and real data demonstrate that LMVP yields state-of-the-art results.
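The leaking mechanism described in the abstract can be sketched roughly as follows. This is a minimal toy illustration, not the paper's implementation: all dimensions, weight matrices, and function names (`disc_features`, `motion_guider`, `generator`) are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): 64-d flattened frames, 16-d features.
FRAME_DIM, HID_DIM = 64, 16

# Randomly initialized toy weights standing in for trained networks.
W_disc = rng.standard_normal((FRAME_DIM, HID_DIM)) * 0.1      # discriminator feature map
W_motion = rng.standard_normal((2 * HID_DIM, HID_DIM)) * 0.1  # motion guider
W_gen = rng.standard_normal((HID_DIM, FRAME_DIM)) * 0.1       # generator head

def disc_features(frame):
    """Discriminator's intermediate features; these are what gets 'leaked'."""
    return np.tanh(frame @ W_disc)

def motion_guider(prev_state, leaked):
    """Update the motion state from its previous value plus leaked discriminator features."""
    return np.tanh(np.concatenate([prev_state, leaked]) @ W_motion)

def generator(motion_state):
    """Predict the next frame from the current motion state."""
    return np.tanh(motion_state @ W_gen)

# One prediction step on a random toy frame.
frame = rng.standard_normal(FRAME_DIM)
state = np.zeros(HID_DIM)
leaked = disc_features(frame)          # discriminator leaks its features
state = motion_guider(state, leaked)   # motion guider consumes the leaked signal
next_frame = generator(state)          # generator predicts the next frame
print(next_frame.shape)  # → (64,)
```

The point of the sketch is only the information flow: the discriminator's intermediate features feed the motion guider, which in turn drives the generator, so motion is learned without human labels.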