Improving Visual Commonsense in Language Models via Late Fusion of Multiple Image Generation

TMLR Paper 4833 Authors

12 May 2025 (modified: 13 Aug 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Commonsense reasoning is fundamentally based on multimodal knowledge. However, Large Language Models (LLMs), trained on textual data only, are limited in their ability to incorporate essential visual information. In contrast, Visual Language Models (VLMs), which excel at visually oriented tasks, often fail at non-visual tasks such as textual commonsense reasoning. This divergence highlights a critical challenge: integrating robust visual understanding with foundational text-based reasoning. To this end, we introduce a method aimed at enhancing LLMs' visual commonsense while maintaining textual modeling and commonsense reasoning performance. Specifically, our method is based on test-time compute scaling. We generate multiple images from the input text prompt and integrate them into the model's decision-making process by mixing their prediction probabilities. To facilitate multimodally grounded language modeling, we employ a late-fusion layer that combines the projected visual features with the output of a pre-trained LLM conditioned on text only. This late-fusion layer enables predictions based on combined image-text knowledge, or on text alone when required. We evaluate our approach on several visual commonsense reasoning tasks together with traditional NLP tasks, including commonsense reasoning and reading comprehension. Our experimental results demonstrate significant improvements over existing baselines. When our method is applied to advanced LLMs (e.g., Llama 3), we observe gains not only in visual commonsense but also on NLP benchmarks.
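To make the described pipeline concrete, below is a minimal, hypothetical sketch of the two ideas in the abstract: a late-fusion layer that combines projected visual features with a text-only LLM's output, and test-time scaling that mixes next-token probabilities over K generated images. All class, function, and tensor names here are illustrative placeholders under assumed shapes; this is not the authors' actual implementation.

```python
# Illustrative sketch only: K images are generated from the text prompt,
# encoded into visual features, projected into the LLM's hidden space,
# fused with the text-only LLM output by a gated late-fusion layer, and the
# resulting next-token distributions are averaged ("mixed") across images.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LateFusionHead(nn.Module):
    """Combine projected visual features with a pre-trained LLM's hidden state."""

    def __init__(self, hidden_dim: int, visual_dim: int, vocab_size: int):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)  # project image features
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)     # how much visual signal to use
        self.lm_head = nn.Linear(hidden_dim, vocab_size)      # shared output head

    def forward(self, text_hidden, visual_feat=None):
        # text_hidden: (batch, hidden_dim) last hidden state of the text-only LLM
        # visual_feat: (batch, visual_dim) or None for a pure text-only prediction
        if visual_feat is None:
            fused = text_hidden                                # fall back to text-only knowledge
        else:
            v = self.visual_proj(visual_feat)
            g = torch.sigmoid(self.gate(torch.cat([text_hidden, v], dim=-1)))
            fused = text_hidden + g * v                        # late fusion of gated visual signal
        return self.lm_head(fused)                             # next-token logits


def mix_image_predictions(head, text_hidden, visual_feats):
    """Average next-token probabilities over K generated images (test-time scaling)."""
    probs = [F.softmax(head(text_hidden, v), dim=-1) for v in visual_feats]
    return torch.stack(probs, dim=0).mean(dim=0)               # (batch, vocab_size)


if __name__ == "__main__":
    hidden_dim, visual_dim, vocab_size, K = 64, 32, 100, 4
    head = LateFusionHead(hidden_dim, visual_dim, vocab_size)
    text_hidden = torch.randn(1, hidden_dim)                        # stand-in for the LLM output
    visual_feats = [torch.randn(1, visual_dim) for _ in range(K)]   # stand-ins for K image encodings
    mixed = mix_image_predictions(head, text_hidden, visual_feats)
    print(mixed.shape)  # torch.Size([1, 100]); each row is a valid probability distribution
```

A design point the sketch mirrors: because the fusion is late and gated, passing `visual_feat=None` recovers the original text-only behavior, which is consistent with the abstract's claim that textual modeling and commonsense performance are preserved.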
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Hanwang_Zhang3
Submission Number: 4833