Interleaved Vision-and-Language Generation via Generative Vokens

ACL ARR 2024 June Submission1115 Authors

14 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Multimodal Large Language Models (MLLMs) have demonstrated strong capabilities in multimodal understanding. However, generating images together with coherent text remains underdeveloped. To address this, we introduce a novel interleaved vision-and-language generation method centered on the concept of ``generative vokens'', which serve as the pivotal elements linking coherent image and text outputs. Our method features a unique two-stage training strategy for description-free multimodal generation, which does not require extensive image descriptions. We further integrate classifier-free guidance to strengthen the alignment between generated images and texts, ensuring more seamless and contextually relevant multimodal interactions. Our model, \modelname, substantially improves over baseline models on multimodal generation datasets, including MMDialog and VIST. Human evaluation shows that \modelname outperforms the baseline model in more than 57\% of cases for multimodal generation, highlighting its efficacy across diverse benchmarks.
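As background on the classifier-free guidance mentioned in the abstract, the sketch below shows the standard guidance combination of conditional and unconditional denoiser predictions. It is a minimal illustration only; the function name, tensor shapes, and guidance scale are illustrative assumptions and are not taken from the paper or its implementation.

```python
import torch


def classifier_free_guidance(noise_cond: torch.Tensor,
                             noise_uncond: torch.Tensor,
                             guidance_scale: float = 7.5) -> torch.Tensor:
    """Combine conditional and unconditional noise predictions.

    Standard classifier-free guidance: start from the unconditional
    prediction and push it toward the conditional one, scaled by
    guidance_scale (a hypothetical default here).
    """
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)


# Toy usage: random tensors stand in for the denoiser's two predictions
# (batch of 1, 4 latent channels, 64x64 latent resolution -- assumed shapes).
cond = torch.randn(1, 4, 64, 64)
uncond = torch.randn(1, 4, 64, 64)
guided = classifier_free_guidance(cond, uncond, guidance_scale=7.5)
```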
Paper Type: Long
Research Area: Generation
Research Area Keywords: Vision and Language, Multimodal Generation
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 1115