ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 | ICLR 2022 Submitted | Readers: Everyone
Keywords: Predictive Learning, Video Prediction
Abstract: Learning predictive models for unlabeled spatiotemporal data is challenging in part because visual dynamics can be highly entangled in real scenes, making existing approaches prone to overfitting partial modes of the physical processes while neglecting others. We name this phenomenon \textit{spatiotemporal mode collapse} and explore it for the first time in predictive learning. The key is to provide the model with a strong inductive bias to discover the compositional structures of latent modes. To this end, we propose ModeRNN, which introduces a novel method for learning structured hidden representations between recurrent states. The core idea of this framework is to first extract various components of visual dynamics using a set of \textit{spatiotemporal slots} with independent parameters. Since multiple space-time patterns may co-exist in a sequence, we leverage learnable importance weights to adaptively aggregate slot features into a unified hidden representation, which is then used to update the recurrent states. Across the entire dataset, different modes elicit different responses on the mixture of slots, which enhances the ability of ModeRNN to build structured representations and thus helps prevent mode collapse. Unlike existing models, ModeRNN is shown to avoid spatiotemporal mode collapse and to benefit from learning mixed visual dynamics.
One-sentence Summary: This paper studies a new problem, spatiotemporal mode collapse. ModeRNN is designed to decompose complex modes into several spatiotemporal slots. With this design, ModeRNN overcomes the limitation of mode collapse and benefits from mixed visual dynamics.
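To make the slot-aggregation idea in the abstract concrete, below is a minimal sketch of a recurrent cell that routes its input through a set of independently parameterized slots and fuses them with learnable, input-dependent importance weights. This is an illustrative reading of the abstract, not the authors' released implementation: the class name `SlotAggregationCell`, the use of plain linear slots, and hyperparameters such as `num_slots` are all assumptions.

```python
# Hypothetical sketch of slot-based state aggregation as described in the
# abstract; names and layer choices are illustrative assumptions.
import torch
import torch.nn as nn


class SlotAggregationCell(nn.Module):
    """Recurrent cell with K independent "spatiotemporal slots" whose
    features are fused by learned, input-dependent importance weights."""

    def __init__(self, input_dim: int, hidden_dim: int, num_slots: int = 4):
        super().__init__()
        # One independently parameterized transform per slot, so each slot
        # can specialize in a different component of the visual dynamics.
        self.slots = nn.ModuleList(
            nn.Linear(input_dim + hidden_dim, hidden_dim)
            for _ in range(num_slots)
        )
        # Produces one importance logit per slot from the input and state.
        self.importance = nn.Linear(input_dim + hidden_dim, num_slots)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        z = torch.cat([x, h], dim=-1)
        # Per-slot features: (batch, num_slots, hidden_dim).
        slot_feats = torch.stack([torch.tanh(s(z)) for s in self.slots], dim=1)
        # Adaptive mixture over slots: different inputs (modes) produce
        # different softmax weights over the same slot set.
        w = torch.softmax(self.importance(z), dim=-1)       # (batch, num_slots)
        fused = (w.unsqueeze(-1) * slot_feats).sum(dim=1)   # (batch, hidden_dim)
        # The fused representation becomes the updated recurrent state.
        return fused


# Usage: unroll the cell over a short synthetic sequence.
cell = SlotAggregationCell(input_dim=64, hidden_dim=128)
h = torch.zeros(8, 128)
for t in range(10):
    x_t = torch.randn(8, 64)
    h = cell(x_t, h)
```

Because the slot transforms share no parameters, gradient updates for one mode need not overwrite features used by another, which is the intuition behind how this structure could mitigate spatiotemporal mode collapse.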