Abstract: Multimodal large language models (MLLMs) have recently achieved state-of-the-art performance on tasks ranging from visual question answering to video understanding.
However, existing studies have focused mainly on visual–textual misalignment, leaving largely unexplored whether MLLMs can preserve an originally correct answer when confronted with misleading information.
We reveal a response uncertainty phenomenon: across nine standard datasets, twelve state-of-the-art open-source MLLMs overturn a previously correct answer in 65\% of cases after receiving a single deceptive cue.
To systematically quantify this vulnerability, we propose a two-stage evaluation pipeline: (1) elicit each model’s original response on unperturbed inputs; (2) inject \emph{explicit} (false-answer hints) and \emph{implicit} (contextual contradictions) misleading instructions, and compute the \emph{misleading rate}—the fraction of correct-to-incorrect flips.
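For concreteness, the misleading rate can be written as follows (a minimal formalization of the description above; the set notation and the symbols $y_i$, $\hat{y}_i$, $\tilde{y}_i$, and $\mathcal{C}$ are our own, not taken from the paper):
\[
\mathrm{MR} \;=\; \frac{\bigl|\{\, i \in \mathcal{C} : \tilde{y}_i \neq y_i \,\}\bigr|}{|\mathcal{C}|},
\qquad
\mathcal{C} \;=\; \{\, i : \hat{y}_i = y_i \,\},
\]
where $y_i$ is the ground-truth answer for example $i$, $\hat{y}_i$ the model's original response on the unperturbed input, and $\tilde{y}_i$ its response after the misleading instruction is injected; $\mathcal{C}$ is the set of examples the model initially answers correctly.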
Leveraging the most susceptible examples, we curate the Multimodal Uncertainty Benchmark (MUB), a collection of image–question pairs stratified into low, medium, and high difficulty based on how many of twelve state-of-the-art MLLMs they mislead.
Extensive evaluation of twelve open-source and five closed-source models reveals high response uncertainty: average misleading rates exceed 86\%, with rates above 67.19\% for explicit cues and above 80.67\% for implicit cues.
We then fine-tune all open-source MLLMs on a compact 2\,000-sample mixed-instruction dataset, which reduces misleading rates to 6.97\% (explicit) and 32.77\% (implicit), boosts consistency by nearly 29.37\% on highly deceptive inputs, and slightly improves accuracy on standard benchmarks.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Response Uncertainty, MLLMs
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Keywords: Response Uncertainty, MLLMs
Submission Number: 825