Bridging the Missing-Modality Gap: Improving Text-Only Calibration of Vision Language Models
Keywords: Vision Language Models, Uncertainty Calibration, Missing Modality
TL;DR: This paper shows that VLMs can become severely overconfident when applied to text-only inputs, and proposes a lightweight latent-imagination module that reconstructs pseudo-visual embeddings from text to restore both accuracy and calibration.
Abstract: Vision-language models (VLMs) are often deployed on text-only inputs, even though they are trained with images. We find that removing the vision modality causes large drops in accuracy and severe miscalibration, and that the model does not behave like its original language backbone under text-only prompting. This failure is not explained by missing semantic information alone. Even when text descriptions preserve key content, confidence becomes unreliable, while adding a visual signal through generated images partially restores accuracy and calibration. We propose the Latent Imagination Module (LIM), a lightweight cross-attention module that predicts imagined latent embeddings from textual input and feeds them into a frozen VLM backbone without pixel-level image synthesis. Across text-only benchmarks, unseen tasks, and missing-image scenarios, LIM improves accuracy and reduces calibration error. These results suggest that latent modality completion is a practical approach for reliable VLM inference under missing-modality conditions.
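The abstract describes LIM as a lightweight cross-attention module mapping text to pseudo-visual embeddings for a frozen backbone. A minimal sketch of one plausible realization is below; the paper does not specify the architecture, so all dimensions, the learned-query design, and every class/parameter name here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LatentImaginationModule(nn.Module):
    """Hypothetical sketch of LIM: learned latent queries cross-attend to
    text embeddings, producing pseudo-visual embeddings that stand in for
    image-encoder features at a frozen VLM's vision interface."""

    def __init__(self, text_dim=768, vis_dim=1024, num_latents=32, num_heads=8):
        super().__init__()
        # Learned queries play the role of "imagined" visual tokens.
        self.latents = nn.Parameter(torch.randn(num_latents, vis_dim) * 0.02)
        # Project text features to the visual embedding width.
        self.text_proj = nn.Linear(text_dim, vis_dim)
        self.cross_attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        self.out_norm = nn.LayerNorm(vis_dim)

    def forward(self, text_emb):
        # text_emb: (B, T, text_dim) token embeddings from the text encoder
        B = text_emb.size(0)
        q = self.latents.unsqueeze(0).expand(B, -1, -1)   # (B, N, vis_dim)
        kv = self.text_proj(text_emb)                     # (B, T, vis_dim)
        attended, _ = self.cross_attn(q, kv, kv)          # queries read from text
        return self.out_norm(q + attended)                # pseudo-visual embeddings
```

In this sketch only LIM's parameters would be trained, consistent with the abstract's claim that the VLM backbone stays frozen and no pixel-level image is synthesized.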
Submission Number: 236