X$^{2}$-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks

Published: 01 Jan 2024, Last Modified: 20 Oct 2024. IEEE Trans. Pattern Anal. Mach. Intell. 2024. License: CC BY-SA 4.0
Abstract: Vision-language pre-training aims to learn alignments between vision and language from large amounts of data. Most existing methods learn only image-text alignments, while some others use pre-trained object detectors to exploit vision-language alignments at the object level. In this paper, we propose to learn multi-grained vision-language alignments with a unified pre-training framework that learns multi-grained aligning and multi-grained localization simultaneously. Based on this framework, we present X$^{2}$-VLM, an all-in-one model with a flexible modular architecture, in which we further unify image-text pre-training and video-text pre-training in one model. X$^{2}$-VLM is able to learn unlimited visual concepts associated with diverse text descriptions. Experimental results show that X$^{2}$-VLM performs best at both base and large scales on image-text and video-text tasks, striking a good trade-off between performance and model scale. Moreover, we show that the modular design of X$^{2}$-VLM makes it highly transferable to any language or domain. For example, by simply replacing the text encoder with XLM-R, X$^{2}$-VLM outperforms state-of-the-art multilingual multi-modal pre-trained models without any multilingual pre-training.
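
The sketch below illustrates the modular design referred to in the abstract, assuming a generic PyTorch wiring; the class names, dimensions, and fusion layer are illustrative assumptions and not the actual X$^{2}$-VLM code. The point is that the text encoder is a plug-in component, so it can be swapped for XLM-R from Hugging Face transformers while the vision and fusion modules are reused as-is.

```python
# Illustrative sketch only: the module names and wiring are assumptions,
# not the released X^2-VLM implementation. It shows how a modular VLM can
# swap its text encoder (e.g., for XLM-R) without touching other components.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class DummyVisionEncoder(nn.Module):
    """Placeholder for a ViT-style encoder; returns (B, N_patches, D) features."""
    def forward(self, pixel_values):
        b = pixel_values.size(0)
        return torch.randn(b, 196, 768)


class ModularVLM(nn.Module):
    def __init__(self, vision_encoder: nn.Module, text_encoder: nn.Module,
                 hidden_dim: int = 768):
        super().__init__()
        self.vision_encoder = vision_encoder   # e.g., a ViT producing patch features
        self.text_encoder = text_encoder       # e.g., BERT, or XLM-R for multilingual use
        # Simplified stand-in for a cross-modal fusion encoder.
        self.fusion = nn.TransformerDecoderLayer(d_model=hidden_dim, nhead=12,
                                                 batch_first=True)

    def forward(self, pixel_values, input_ids, attention_mask):
        vision_feats = self.vision_encoder(pixel_values)                 # (B, N, D)
        text_feats = self.text_encoder(input_ids=input_ids,
                                       attention_mask=attention_mask).last_hidden_state
        # Text tokens attend over visual features for cross-modal alignment.
        return self.fusion(text_feats, vision_feats)


# Swapping in XLM-R: only the text encoder changes; vision and fusion are reused.
xlmr = AutoModel.from_pretrained("xlm-roberta-base")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = ModularVLM(DummyVisionEncoder(), xlmr)
```

Under this kind of design, multilingual transfer requires no change to the vision or fusion components, which is consistent with the abstract's claim that replacing the text encoder with XLM-R suffices.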
