Keywords: Transferability of reasoning, pre-training of LLMs, MLLMs
TL;DR: Explore and understand the visual priors (including reasoning) within LLMs and thus build better MLLMs.
Abstract: Large Language Models (LLMs), despite being trained on text alone, surprisingly develop rich visual priors. These priors allow latent visual capabilities to be unlocked for vision tasks with a relatively small amount of multimodal data and, in some cases, enable the model to perform visual tasks without ever having seen an image. This paper aims to demystify this phenomenon. Through systematic analysis, we reveal that these priors are not uniform but are composed of separable 'perception' and 'reasoning' priors with distinct scaling trends and origins. We show that an LLM's latent visual reasoning ability is predominantly cultivated by pre-training on reasoning-centric data (e.g., code, math, academia) and scales progressively. This reasoning prior acquired from language pre-training is transferable and universally applicable to visual reasoning. In contrast, the perception prior emerges more diffusely from broad corpora, and perception ability is more sensitive to the vision encoder and visual instruction tuning data. In parallel, text describing the visual world proves crucial, though its performance impact saturates rapidly. Leveraging these insights, we propose a data-centric recipe for pre-training vision-aware LLMs. The resulting 7B model, trained on this recipe for 1T tokens, demonstrates stronger vision capabilities without compromising language proficiency. Our findings are grounded in over 100 controlled experiments consuming 500,000 GPU-hours, spanning the full MLLM construction pipeline—from LLM pre-training to visual alignment and supervised multimodal fine-tuning—across five model scales, a wide range of data categories and mixtures, and multiple adaptation setups. Together, this work provides a new way of deliberately cultivating visual priors from language pre-training, paving the way for the next generation of multimodal LLMs.
Submission Number: 2