MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark

ACL ARR 2025 February Submission 1417 Authors

13 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract: This paper introduces MMMU-Pro, a robust version of the Massive Multi-discipline Multimodal Understanding and Reasoning (MMMU) benchmark. MMMU-Pro rigorously assesses multimodal models' true understanding and reasoning capabilities through a three-step process built on MMMU: (1) filtering out questions answerable by text-only models, (2) augmenting the candidate options, and (3) introducing a vision-only input setting in which questions are embedded within images. This setting challenges AI to truly "see" and "read" simultaneously, testing a core human cognitive skill: seamlessly integrating visual and textual information. Results show that model performance is substantially lower on MMMU-Pro than on MMMU, with drops ranging from 16.8% to 26.9% across models. We explore the impact of OCR prompts and Chain of Thought (CoT) reasoning, finding that OCR prompts have minimal effect while CoT generally improves performance. MMMU-Pro provides a more rigorous evaluation tool, closely mimicking real-world scenarios and offering valuable directions for future multimodal research.
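To make step (1) of the construction process concrete, the sketch below shows one way a text-only filter could be implemented. It is a minimal Python illustration, not the authors' exact protocol: the model list, the majority-vote criterion, the number of trials, and the ask_text_only helper are all assumptions introduced here for exposition.

    # Sketch of step (1): drop questions that text-only models can already
    # answer without seeing the image. Criteria and helpers are illustrative.
    import random
    from dataclasses import dataclass

    @dataclass
    class Question:
        prompt: str   # question text plus lettered candidate options
        answer: str   # gold option letter, e.g. "B"

    def ask_text_only(model: str, q: Question) -> str:
        """Stand-in for a call to a text-only LLM; returns an option letter.
        A real implementation would wrap an API call to the named model."""
        return random.choice(["A", "B", "C", "D"])  # placeholder behavior

    def is_text_solvable(q: Question, models: list[str], trials: int = 3) -> bool:
        """Treat a question as text-solvable if any model answers it
        correctly in a majority of repeated trials (assumed criterion)."""
        for model in models:
            hits = sum(ask_text_only(model, q) == q.answer for _ in range(trials))
            if hits > trials // 2:
                return True
        return False

    def filter_questions(pool: list[Question], models: list[str]) -> list[Question]:
        """Keep only questions that no text-only model reliably solves."""
        return [q for q in pool if not is_text_solvable(q, models)]

Under this reading, questions that survive the filter genuinely require the image and proceed to option augmentation (step 2); the exact models and vote threshold used by the benchmark may differ.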
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Evaluation, Multimodal Understanding, Multimodal LLMs
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1417