Abstract: We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through novel co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- often by adding only a single linear layer to the base architecture. We observe improvements of 2-5 percentage points across tasks compared to existing task-specific models and set a new state of the art on Visual Commonsense Reasoning (VCR) and Referring Expressions (RefCOCO+). Our work represents a shift away from learning groundings between vision and language as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
Code Link: https://github.com/jiasenlu/ViLBert_beta
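To make the abstract's description of the two-stream design more concrete, below is a minimal sketch of a co-attentional transformer block in which each stream queries the other stream's keys and values. This is an illustrative PyTorch sketch, not the official implementation from the repository above; the module names, dimensions, and use of `nn.MultiheadAttention` are assumptions.

```python
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    """Illustrative co-attentional block: visual and linguistic streams
    exchange information by attending to each other's features."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Each stream's queries attend over the *other* stream's keys/values.
        self.vis_attends_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_norm = nn.LayerNorm(dim)
        self.txt_norm = nn.LayerNorm(dim)
        self.vis_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.txt_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        # Image regions attend to word tokens, and vice versa (computed from the
        # same inputs so the exchange is symmetric).
        v_ctx, _ = self.vis_attends_txt(vis, txt, txt)
        t_ctx, _ = self.txt_attends_vis(txt, vis, vis)
        # Residual connection, normalization, and a per-stream feed-forward layer.
        vis = self.vis_norm(vis + v_ctx)
        txt = self.txt_norm(txt + t_ctx)
        vis = vis + self.vis_ffn(vis)
        txt = txt + self.txt_ffn(txt)
        return vis, txt


# Example usage: a batch of 2 images with 36 region features and 20 token
# embeddings, both projected to a shared 768-d space (values are placeholders).
vis_feats = torch.randn(2, 36, 768)
txt_feats = torch.randn(2, 20, 768)
vis_out, txt_out = CoAttentionBlock()(vis_feats, txt_feats)
```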
CMT Num: 16