LAVA: Language Audio Vision Alignment for Data-Efficient Video Pre-Training

26 May 2022, 20:09 (modified: 23 Jul 2022, 02:25) · ICML 2022 Pre-training Workshop
Keywords: Video, Self-Supervised, Pre-Training, Transformers, Multi-Modal, Action Recognition
TL;DR: Contrastive pre-training of transformers on the visual, audio, and language streams of videos
Abstract: Generating representations of video data is of key importance in advancing the field of machine perception. Most current techniques rely on hand-annotated data, which can be difficult to work with, expensive to generate, and hard to scale. In this work, we propose LAVA, a novel approach based on contrastive learning that learns joint language, audio, and video representations in a self-supervised manner. We pre-train LAVA on the Kinetics-700 dataset, using a transformer encoder to learn representations for each modality. We then demonstrate that LAVA performs competitively with the current state-of-the-art self-supervised and weakly-supervised techniques on UCF-101 and HMDB-51 video action recognition, while pre-training on a fraction of the unlabeled data those techniques require.
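The abstract describes aligning three per-modality transformer embeddings with a contrastive objective. A minimal sketch of that idea is a symmetric InfoNCE loss applied to each pair of modalities; the function names, the temperature value, and the choice to sum the three pairwise losses are illustrative assumptions here, not the paper's actual implementation:

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    a, b: (N, D) arrays where row i of each batch comes from the same clip.
    The temperature is a hypothetical default, not taken from the paper.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature        # (N, N) cosine-similarity matrix
    labels = np.arange(len(a))            # matching pairs sit on the diagonal

    def xent(l):
        # row-wise cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the two directions (a -> b and b -> a)
    return 0.5 * (xent(logits) + xent(logits.T))

def tri_modal_loss(video, audio, text):
    """Sum the pairwise contrastive losses over the three modalities
    (an assumed combination; weighting schemes vary in practice)."""
    return (info_nce(video, audio)
            + info_nce(video, text)
            + info_nce(audio, text))
```

Each encoder's output for a clip is treated as one row, so the loss pulls together the video, audio, and text embeddings of the same clip while pushing apart embeddings from different clips in the batch.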