MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs

18 Sept 2025 (modified: 30 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Multimodal reasoning, evaluation
Abstract: Logical reasoning is a fundamental aspect of human intelligence and an essential capability for multimodal large language models (MLLMs). Despite significant advances in multimodal reasoning, existing benchmarks fail to comprehensively evaluate the reasoning abilities of MLLMs because they lack an explicit categorization of logical reasoning types and rest on an unclear understanding of what reasoning entails. To address these issues, we introduce **MME-Reasoning**, a comprehensive benchmark designed to evaluate the reasoning ability of MLLMs, covering all three types of reasoning (*i.e.*, inductive, deductive, and abductive). We carefully curate the data so that each question evaluates reasoning ability rather than perceptual skill or breadth of knowledge, and we extend the evaluation protocols to cover diverse question formats. Our evaluation reveals substantial limitations of state-of-the-art MLLMs under a holistic assessment of logical reasoning capabilities: even the most advanced MLLMs show limited performance in comprehensive logical reasoning, with notable performance imbalances across reasoning types. In addition, we conduct an in-depth analysis of approaches commonly believed to enhance reasoning abilities, such as "thinking mode" and rule-based RL. We hope the community will pay more attention to the comprehensive reasoning capabilities of MLLMs rather than focusing only on subsets such as math.
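As a minimal sketch of the kind of per-type breakdown such a benchmark reports (the record format and field names below are hypothetical illustrations, not the paper's actual evaluation code):

```python
from collections import defaultdict

def per_type_accuracy(results):
    """Aggregate accuracy separately for each reasoning type,
    exposing imbalances that a single overall score would hide."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["type"]] += 1
        hits[r["type"]] += int(r["correct"])
    return {t: hits[t] / totals[t] for t in sorted(totals)}

# Hypothetical graded outputs: each item carries its reasoning
# type (inductive, deductive, or abductive) and a binary score.
results = [
    {"type": "inductive", "correct": True},
    {"type": "deductive", "correct": False},
    {"type": "abductive", "correct": True},
]

print(per_type_accuracy(results))
# {'abductive': 1.0, 'deductive': 0.0, 'inductive': 1.0}
```

Reporting per-type scores alongside the overall score is what allows the imbalance across inductive, deductive, and abductive reasoning to surface at all.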
Primary Area: datasets and benchmarks
Submission Number: 10289