X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers

13 Jun 2020 (modified: 15 Sept 2020) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Keywords: text-to-image, vision, nlp, vision and language, multi-modal, LXMERT, X-LXMERT, transformer, visual question answering, visual reasoning, image captioning, image generation, text-to-image synthesis, text-to-image generation, machine learning
TL;DR: We introduce X-LXMERT, a multi-modal transformer model that can perform text-to-image generation, image captioning and visual question answering.
Abstract: Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state-of-the-art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. Recent work has also successfully adapted such models towards the generative task of image captioning. This raises the question: can these models go the other way and generate images from pieces of text? Our analysis of a popular representative from this model family, LXMERT, finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements that enable it to paint: discretizing visual representations, using uniform masking with a large range of masking ratios, and aligning the right pre-training datasets to the right objectives. X-LXMERT's image generation capabilities rival state-of-the-art generative models, while its question answering and captioning abilities remain comparable to LXMERT's.
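To make the "uniform masking with a large range of masking ratios" refinement concrete, here is a minimal Python sketch of how such a masking scheme could work over discretized visual tokens. The function name, the MASK_ID placeholder, and the ratio range are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import random

MASK_ID = -1  # hypothetical placeholder id for a masked visual token


def mask_visual_tokens(tokens, min_ratio=0.1, max_ratio=1.0, rng=random):
    """Mask a uniformly sampled fraction of discretized visual tokens.

    Rather than a single fixed masking ratio (e.g. 15%), the ratio itself
    is drawn uniformly from a wide range, so pre-training covers everything
    from light infilling to near-complete generation. The exact range used
    by X-LXMERT is an assumption here.
    """
    ratio = rng.uniform(min_ratio, max_ratio)
    n_mask = max(1, round(ratio * len(tokens)))
    masked_positions = rng.sample(range(len(tokens)), n_mask)
    masked = list(tokens)
    for pos in masked_positions:
        masked[pos] = MASK_ID
    return masked, sorted(masked_positions)


# Example: an 8x8 grid of cluster ids standing in for discretized
# visual representations (64 tokens, vocabulary size assumed to be 10000).
grid_tokens = [random.randrange(10000) for _ in range(64)]
masked_tokens, positions = mask_visual_tokens(grid_tokens)
print(f"masked {len(positions)}/64 visual tokens")
```

The intuition behind sampling the ratio, as the abstract suggests, is that text-to-image generation amounts to predicting all visual tokens from text alone, so the model must occasionally train at masking ratios far above the usual fixed values.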