Dual Modalities of Text: Visual and Textual Generative Pre-Training

ACL ARR 2024 June Submission508 Authors

11 Jun 2024 (modified: 02 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Harnessing visual texts represents a burgeoning frontier in the evolution of language modeling. In this paper, we introduce a novel pre-training framework for a suite of pixel-based autoregressive language models, pre-trained on a corpus of over 400 million document images. Our approach is characterized by a dual-modality training regimen that engages visual data through next patch prediction with a regression head and/or textual data via next token prediction with a classification head. This study is particularly focused on investigating the synergistic interplay between visual and textual modalities of language. Our comprehensive evaluation across a diverse array of benchmarks reveals that the confluence of visual and textual data substantially augments the efficacy of pixel-based language models. Notably, our findings show that a unidirectional pixel-based model, devoid of textual data during training, can match the performance of advanced bidirectional pixel-based models on various language understanding benchmarks. This work highlights the considerable untapped potential of integrating visual and textual information for language modeling purposes. We will release our code, data, and checkpoints to inspire further research advancement.
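To make the dual-modality objective concrete, below is a minimal sketch of how a single autoregressive backbone could be trained with both a regression head for next patch prediction and a classification head for next token prediction, as the abstract describes. This is not the authors' released implementation; the module names, dimensions, and loss weighting are hypothetical and only illustrate the general setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def causal_mask(n: int, device) -> torch.Tensor:
    # Boolean mask where True blocks attention to future positions.
    return torch.triu(torch.ones(n, n, dtype=torch.bool, device=device), diagonal=1)


class DualModalityLM(nn.Module):
    """Hypothetical sketch: one autoregressive backbone, two prediction heads."""

    def __init__(self, d_model=512, n_heads=8, n_layers=4,
                 patch_dim=16 * 16 * 3, vocab_size=32000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.patch_embed = nn.Linear(patch_dim, d_model)      # image-patch inputs
        self.token_embed = nn.Embedding(vocab_size, d_model)  # text-token inputs
        self.regression_head = nn.Linear(d_model, patch_dim)      # next patch prediction
        self.classification_head = nn.Linear(d_model, vocab_size)  # next token prediction

    def visual_loss(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, seq, patch_dim) flattened pixel patches of rendered text.
        h = self.backbone(self.patch_embed(patches),
                          mask=causal_mask(patches.size(1), patches.device))
        pred = self.regression_head(h[:, :-1])          # predict patch t+1 from prefix
        return F.mse_loss(pred, patches[:, 1:])

    def textual_loss(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq) integer token ids.
        h = self.backbone(self.token_embed(tokens),
                          mask=causal_mask(tokens.size(1), tokens.device))
        logits = self.classification_head(h[:, :-1])
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               tokens[:, 1:].reshape(-1))


if __name__ == "__main__":
    model = DualModalityLM()
    patches = torch.rand(2, 32, 16 * 16 * 3)
    tokens = torch.randint(0, 32000, (2, 64))
    # A joint step could simply sum the two losses; the actual mixing strategy
    # (visual-only, text-only, or both) is a training-regimen choice.
    loss = model.visual_loss(patches) + model.textual_loss(tokens)
    loss.backward()
```

In this sketch the two heads share all backbone parameters, which is one plausible way to study the interplay between the visual and textual modalities; the paper's actual architecture and loss weighting may differ.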
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Language Modeling, Multimodality and Language Grounding to Vision, Robotics and Beyond, Multilingualism and Cross-Lingual NLP
Contribution Types: Publicly available software and/or pre-trained models, Data resources
Languages Studied: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili, Urdu
Submission Number: 508