Track: Regular papers (within 8 pages excluding appendix)
Keywords: Benchmarking, Evaluation, Multimodality, Visual Question Answering, Math Problem Solving, Thinking with Images
TL;DR: VisAidMath, the first benchmark for visual-aided mathematical reasoning, uncovers a 'reasoning illusion': LMMs achieve high scores while failing to generate the visual aids needed to solve the problems.
Abstract: A hallmark of advanced artificial intelligence is the capacity to progress from passive visual perception to the strategic modification of visual information in service of complex reasoning. This capability, however, remains critically underdeveloped in current Large Multi-modal Models (LMMs). The deficiency is often masked by evaluation metrics that prioritize final-answer accuracy, creating an illusion of competence where genuine reasoning is absent. Using geometric problem-solving as a precise instrument, we probe this issue through tasks that require constructing visual aids. To this end, we introduce \textbf{VisAidMath}, a challenging benchmark, together with a novel Three-Layered Funnel Evaluation Framework. This framework moves beyond simple accuracy (ACCU) to scrutinize the generation of valid visual aids (PVA) and the soundness of the subsequent reasoning steps (SPRS). Extensive experiments on state-of-the-art models, including Doubao-Seed-1.6 and o4, reveal a profound ``Reasoning Illusion'': high surface-level accuracy conceals a catastrophic failure to produce valid visual aids or to reason from them. Our findings expose a fundamental schism between visual perception and logical deduction in modern LMMs. We will host a public evaluation platform on CodaBench.
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Submission Number: 15