Abstract: Despite strong performance in visual understanding and language-based reasoning, Vision-Language Models (VLMs) struggle with tasks requiring integrated perception and symbolic computation. We study this limitation through visual equation solving, where mathematical equations are embedded in images, variables are represented by object icons, and coefficients must be inferred by counting. While VLMs perform well on textual equations, they fail on visually grounded counterparts. To understand this gap, we decompose the task into coefficient counting and variable recognition, and find that counting is the primary bottleneck, even when recognition is accurate. We also observe that composing recognition and reasoning introduces additional errors, highlighting challenges in multistep visual reasoning. Finally, as equation complexity increases, symbolic reasoning itself becomes a limiting factor. These findings reveal key weaknesses in current VLMs and point toward future improvements in visually grounded mathematical reasoning.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: vision question answering, math QA, benchmarking, evaluation
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Keywords: Vision-Language Models, visual reasoning, visual arithmetic, visual grounding, math reasoning
Submission Number: 5151