Better Language Models Exhibit Higher Visual Alignment

TMLR Paper5603 Authors

12 Aug 2025 (modified: 18 Aug 2025) · Under review for TMLR · License: CC BY 4.0
Abstract: How well do text-only large language models (LLMs) align with the visual world? We present a systematic evaluation of this question by incorporating frozen representations of various language models into a discriminative vision-language framework and measuring zero-shot generalization to unseen concepts. We find that decoder-based models exhibit stronger visual alignment than encoders, even when controlling for model and dataset size. Moreover, language modeling performance correlates with visual generalization, suggesting that advances in unimodal LLMs can simultaneously improve vision models. Leveraging these insights, we propose ShareLock, a lightweight method for fusing frozen vision and language backbones. ShareLock achieves robust performance across tasks while drastically reducing the need for paired data and compute. With just 563k image-caption pairs and under one GPU-hour of training, it reaches 51% accuracy on ImageNet. In cross-lingual settings, ShareLock dramatically outperforms CLIP, achieving 38.7% top-1 accuracy on Chinese image classification versus CLIP’s 1.4%. Code will be released.
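The abstract describes ShareLock only at a high level: a lightweight module fused onto frozen vision and language backbones, trained on a modest number of image-caption pairs and evaluated via zero-shot classification. The sketch below is a hypothetical illustration of one such setup, assuming "fusing frozen backbones" means training a small head over cached frozen features with a CLIP-style symmetric contrastive loss; the actual ShareLock architecture, loss, and feature dimensions are not specified here and may differ.

```python
# Hypothetical sketch: train a small head over cached frozen features with a
# CLIP-style contrastive loss. Dimensions and architecture are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHead(nn.Module):
    """Small trainable head mapping frozen LLM caption features into the
    (frozen) vision embedding space."""
    def __init__(self, text_dim: int, vision_dim: int, hidden: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, vision_dim),
        )
        # Learnable temperature, initialized to log(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        txt = F.normalize(self.mlp(text_feats), dim=-1)
        img = F.normalize(image_feats, dim=-1)
        # Rows: images, columns: captions.
        return self.logit_scale.exp() * img @ txt.t()

def contrastive_loss(logits: torch.Tensor) -> torch.Tensor:
    """Symmetric cross-entropy over the image-text similarity matrix."""
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy training step on cached (frozen) features; dimensions are placeholders.
head = FusionHead(text_dim=4096, vision_dim=1024)
opt = torch.optim.AdamW(head.parameters(), lr=1e-4)

image_feats = torch.randn(256, 1024)  # stand-in for frozen vision features
text_feats = torch.randn(256, 4096)   # stand-in for frozen LLM features

loss = contrastive_loss(head(text_feats, image_feats))
loss.backward()
opt.step()
```

In a setup like this, zero-shot classification would embed class-name captions through the frozen LLM and the trained head, then assign each image to the class whose embedding is most similar; because only the small head is trained on cached features, the paired-data and compute requirements stay low, consistent with the abstract's reported budget.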
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Xinlei_Chen1
Submission Number: 5603