Seeing is Believing? Counting Bananas Helps Multimodal Large Language Models Mitigate Modality Bias

ACL ARR 2025 February Submission 1709 Authors

14 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · License: CC BY 4.0
Abstract: Multimodal Large Language Models (MLLMs) often encounter irrelevant or misleading images in real-world applications. To handle such inputs, MLLMs must dynamically adjust their reliance on each modality according to its relevance. However, we find that MLLMs disproportionately favor visual inputs even when textual cues are equally informative. This modality bias leads to imbalanced reasoning and reduced robustness, especially in the presence of irrelevant images. In this paper, we systematically investigate modality bias by designing a Banana-Counting dataset in which identical information is embedded in both textual and visual form, ensuring that models have equal access to both modalities. Our experiments confirm that most MLLMs prioritize visual information even under these controlled conditions. To mitigate this bias, we construct a balanced multimodal Banana-Counting training dataset and fine-tune MLLMs using LoRA-based adaptation. Our approach significantly reduces modality bias while maintaining, and in some cases improving, general reasoning performance on benchmarks such as ScienceQA, CSQA, and MMLU. The fine-tuned models are also more robust to noisy images, yielding more reliable behavior in real-world multimodal scenarios. Our study highlights the importance of balanced multimodal training strategies and offers insights into improving MLLMs' ability to integrate information effectively across modalities.
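The abstract does not spell out the fine-tuning setup beyond "LoRA-based adaptation." The following is a minimal sketch of what such an adapter configuration could look like with the Hugging Face PEFT library; the base checkpoint, rank, scaling factor, and target modules are illustrative assumptions, not the authors' reported settings.

```python
# Minimal sketch of LoRA-based adaptation of an MLLM (assumed setup;
# the paper's actual base model and hyperparameters are not given here).
from transformers import LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Hypothetical base model; any LLaVA-style MLLM checkpoint would do.
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # LoRA scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
)

# Wrap the model so only the small adapter matrices are trainable,
# leaving the pretrained vision and language weights frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Under this kind of setup, fine-tuning on the balanced Banana-Counting pairs would proceed with a standard supervised trainer over interleaved image-text examples, updating only the adapter weights.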
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: multimodality, cross-modal information extraction
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 1709