Compound Tokens: Channel Fusion for Vision-Language Representation Learning

01 Mar 2023 (modified: 04 May 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: multi-modal fusion, vision-language model
TL;DR: We propose a new multi-modal fusion method that concatenates tokens along the channel dimension, yielding strong representations for several visual question answering tasks.
Abstract: We present an effective method for fusing visual and language representations for several question answering tasks, including visual question answering and visual entailment. In contrast to prior works that concatenate unimodal representations or rely only on cross-attention, we compose multimodal representations via channel fusion. By fusing along the channel dimension, the model aligns tokens more effectively than standard methods. These multimodal representations, which we call compound tokens, are generated with cross-attention transformer layers. We demonstrate the effectiveness of compound tokens using an encoder-decoder vision-language model trained end-to-end in the open-vocabulary setting. Compound Tokens achieve highly competitive performance across a range of question answering tasks, including GQA, VQA2.0, and SNLI-VE.
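The core idea can be sketched in a few lines: cross-attention (text queries attending to image keys/values) produces one image-aligned feature per text token, and concatenating that feature with the text token along the channel dimension yields a "compound token". The following is a minimal single-head NumPy illustration of this fusion step, not the paper's implementation; the function name, shapes, and the single-head simplification are assumptions (the paper uses full cross-attention transformer layers).

```python
import numpy as np

def compound_tokens(text_tokens, image_tokens):
    """Hypothetical sketch of channel fusion.

    text_tokens:  (B, T, D) text token embeddings (queries)
    image_tokens: (B, I, D) image token embeddings (keys/values)
    returns:      (B, T, 2*D) compound tokens
    """
    d = text_tokens.shape[-1]
    # Single-head scaled dot-product cross-attention scores: (B, T, I)
    scores = text_tokens @ image_tokens.transpose(0, 2, 1) / np.sqrt(d)
    # Numerically stable softmax over the image tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # One image-aligned feature per text token: (B, T, D)
    attended = weights @ image_tokens
    # Channel fusion: concatenate along the feature (channel) dimension,
    # rather than along the sequence dimension as in standard merged attention.
    return np.concatenate([text_tokens, attended], axis=-1)
```

Note the contrast with the common alternative of concatenating along the token (sequence) dimension: channel fusion keeps the sequence length fixed at T while doubling the per-token feature width.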