Parameter Efficient Multimodal Transformers for Video Representation Learning

Published: 12 Jan 2021, Last Modified: 05 May 2023
ICLR 2021 Poster
Readers: Everyone
Keywords: Self-supervised learning, audio-visual representation learning, video representation learning
Abstract: The recent success of Transformers in the language domain has motivated adapting them to multimodal settings, where a new visual model is trained in tandem with an already pretrained language model. However, due to the excessive memory requirements of Transformers, existing work typically fixes the language model and trains only the vision module, which limits the model's ability to learn cross-modal information in an end-to-end manner. In this work, we focus on reducing the parameters of multimodal Transformers in the context of audio-visual video representation learning. We alleviate the high memory requirement by sharing the parameters of the Transformers across layers and modalities; we decompose the Transformer into modality-specific and modality-shared parts so that the model learns the dynamics of each modality both individually and jointly, and we propose a novel parameter sharing scheme based on low-rank approximation. We show that our approach reduces the parameters of the Transformers by up to 97%, allowing us to train our model end-to-end from scratch. We also propose a negative sampling approach based on an instance similarity measured in the CNN embedding space, which our model learns together with the Transformers. To demonstrate our approach, we pretrain our model on 30-second clips (480 frames) from Kinetics-700 and transfer it to audio-visual classification tasks.
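The parameter-sharing scheme the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the paper's released code: the class names `SharedLowRankLinear` and `SharedStack`, the dimensions, the rank, and the modality keys are all our own assumptions. It shows the two ideas named in the abstract: one set of Transformer weights reused across every layer and both modalities, and a per-modality low-rank correction W_m = W_shared + U_m V_m, so each modality's private cost is O(dim * rank) rather than O(dim^2).

```python
# Minimal sketch (an assumption, not the authors' code) of cross-layer /
# cross-modality parameter sharing with per-modality low-rank corrections.
import torch
import torch.nn as nn


class SharedLowRankLinear(nn.Module):
    """Linear map whose full-rank weight is shared across modalities;
    each modality adds a rank-`rank` correction via factors
    U_m (dim x rank) and V_m (rank x dim)."""

    def __init__(self, dim, rank, modalities=("audio", "video")):
        super().__init__()
        self.shared = nn.Linear(dim, dim)  # modality-shared part
        self.U = nn.ParameterDict(
            {m: nn.Parameter(0.02 * torch.randn(dim, rank)) for m in modalities})
        self.V = nn.ParameterDict(
            {m: nn.Parameter(0.02 * torch.randn(rank, dim)) for m in modalities})

    def forward(self, x, modality):
        # x: (batch, seq, dim); the low-rank path is two thin matmuls.
        low_rank = (x @ self.V[modality].t()) @ self.U[modality].t()
        return self.shared(x) + low_rank


class SharedStack(nn.Module):
    """Cross-layer sharing: the same block parameters are applied `depth`
    times, so extra depth adds compute but no parameters."""

    def __init__(self, dim=512, rank=16, depth=12):
        super().__init__()
        self.depth = depth
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.proj = SharedLowRankLinear(dim, rank)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, modality):
        for _ in range(self.depth):
            h = self.norm1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.proj(self.norm2(x), modality)
        return x


# Usage: one stack processes both streams with the same weights.
model = SharedStack()
audio = model(torch.randn(2, 480, 512), "audio")
video = model(torch.randn(2, 480, 512), "video")
```

Under these assumptions, the shared projection costs dim^2 parameters once, while each modality adds only 2 * dim * rank, and depth is free in parameters; this is the kind of trade-off that makes a ~97% reduction plausible.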
One-sentence Summary: We propose a technique to reduce the number of parameters in multimodal BERT models by up to 97% (from 128 million to 4 million parameters).
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Data: [AudioSet](https://paperswithcode.com/dataset/audioset), [Charades](https://paperswithcode.com/dataset/charades), [ESC-50](https://paperswithcode.com/dataset/esc-50), [Kinetics](https://paperswithcode.com/dataset/kinetics), [Kinetics-700](https://paperswithcode.com/dataset/kinetics-700), [UCF101](https://paperswithcode.com/dataset/ucf101)
