Learning Video Representations using Contrastive Bidirectional Transformer

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: Generalized BERT for continuous and cross-modal inputs; state-of-the-art self-supervised video representations.
Abstract: This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning, and segmentation) compared to existing methods. Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE). We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and demonstrate that such cross-modal training (when possible) helps even more.
Keywords: self-supervised learning, video representations, cross-modal learning
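The core change described in the abstract is swapping BERT's vocabulary softmax for an NCE-style contrastive objective, so that masked prediction works over continuous feature vectors rather than discrete tokens. Below is a minimal sketch of such a loss, assuming a dot-product critic with in-batch negatives and L2-normalized features; the function name, temperature value, and negative-sampling scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def masked_feature_nce_loss(predicted, targets, temperature=0.1):
    """Contrastive (NCE-style) loss for masked prediction over real-valued features.

    predicted: (N, D) transformer outputs at the masked positions.
    targets:   (N, D) original feature vectors of the masked inputs.
    Row i of `predicted` is scored against all rows of `targets`;
    the matching row is the positive, the rest act as negatives.
    (Illustrative sketch only; the paper's critic and negatives may differ.)
    """
    predicted = F.normalize(predicted, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = predicted @ targets.t() / temperature      # (N, N) similarity matrix
    labels = torch.arange(predicted.size(0), device=predicted.device)
    return F.cross_entropy(logits, labels)              # positives on the diagonal
```

Because the targets are continuous vectors, there is no finite vocabulary to normalize over; the contrastive objective sidesteps this by only requiring the model to rank the true feature above sampled negatives.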