Masked Autoencoders As Spatiotemporal Learners

Published: 31 Oct 2022 · Last Modified: 13 Jan 2023 · NeurIPS 2022 Accept
TL;DR: MAE learns strong video representations with minimal inductive bias. The masking ratio can be very high, which dramatically improves efficiency. Encouraging and rare results of visual pre-training on real-world, uncurated data.
Abstract: This paper studies a conceptually simple extension of Masked Autoencoders (MAE) to spatiotemporal representation learning from videos. We randomly mask out spacetime patches in videos and learn an autoencoder to reconstruct them in pixels. Interestingly, we show that our MAE method can learn strong representations with almost no inductive bias on spacetime (except for patch and positional embeddings), and spacetime-agnostic random masking performs best. We observe that the optimal masking ratio is as high as 90% (vs. 75% on images), supporting the hypothesis that this ratio is related to the information redundancy of the data. A high masking ratio leads to a large speedup, e.g., a >4x wall-clock speedup or more. We report competitive results on several challenging video datasets using vanilla Vision Transformers. We observe that MAE can outperform supervised pre-training by large margins. We further report encouraging results of training on real-world, uncurated Instagram data. Our study suggests that the general framework of masked autoencoding (BERT, MAE, etc.) can be a unified methodology for representation learning with minimal domain knowledge.
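To make the masking scheme described in the abstract concrete, below is a minimal PyTorch sketch of spacetime-agnostic random masking at a 90% ratio, assuming a standard (B, C, T, H, W) video tensor and a 2x16x16 spacetime patch size. The function name, patch size, and tensor layout are illustrative assumptions, not the authors' implementation.

```python
import torch

def random_spacetime_masking(video, patch_size=(2, 16, 16), mask_ratio=0.9):
    """Sketch of spacetime-agnostic random masking (not the authors' code).

    video: (B, C, T, H, W) tensor whose T/H/W divide the patch size.
    Returns the ~10% visible patches the encoder would see, plus their indices.
    """
    B, C, T, H, W = video.shape
    pt, ph, pw = patch_size
    # Patchify into non-overlapping spacetime cubes: (B, N, pt*ph*pw*C).
    x = video.reshape(B, C, T // pt, pt, H // ph, ph, W // pw, pw)
    x = x.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(B, -1, pt * ph * pw * C)
    N = x.shape[1]
    num_keep = int(N * (1 - mask_ratio))
    # Per-sample random permutation of patch indices; keep the first subset.
    # No spacetime structure is imposed on the mask, matching the abstract.
    noise = torch.rand(B, N, device=video.device)
    ids_keep = noise.argsort(dim=1)[:, :num_keep]
    visible = torch.gather(
        x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, x.shape[-1])
    )
    return visible, ids_keep

# Example: a 16-frame 224x224 clip yields 8*14*14 = 1568 patches,
# of which only ~156 are visible at a 90% masking ratio.
clip = torch.randn(2, 3, 16, 224, 224)
visible, ids_keep = random_spacetime_masking(clip)
print(visible.shape)  # torch.Size([2, 156, 1536])
```

Because the encoder processes only the visible patches, the cost of the forward/backward pass shrinks roughly in proportion to the kept fraction, which is the source of the wall-clock speedup claimed above.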
Supplementary Material: pdf