Unfolding Spatial Cognition: Evaluating Multimodal Models on Visual Simulations

Published: 26 Jan 2026, Last Modified: 02 Mar 2026
Venue: ICLR 2026 Poster
License: CC BY 4.0
Keywords: spatial reasoning; visual reasoning
TL;DR: STARE: a benchmark designed to rigorously evaluate MLLMs on tasks better solved through multi-step visual simulation.
Abstract: Spatial cognition is essential for human intelligence, enabling problem-solving through visual simulations rather than relying solely on verbal reasoning. However, existing AI benchmarks primarily assess verbal reasoning, neglecting the complexities of non-verbal, multi-step visual simulation. We introduce STARE (Spatial Transformations and Reasoning Evaluation), a benchmark designed to evaluate multimodal large language models on tasks better solved through multi-step visual simulation. STARE features ~4K tasks spanning foundational geometric transformations (2D and 3D), integrated spatial reasoning (cube net folding and tangram puzzles), and real-world spatial reasoning (perspective and temporal reasoning), reflecting practical cognitive challenges like object assembly, mechanical diagram interpretation, and everyday spatial navigation. Our evaluations show that models excel at reasoning over simpler 2D transformations, but perform close to random chance on more complex tasks like 3D cube net folding and tangram puzzles that require multi-step visual simulation. Humans achieve near-perfect accuracy but take considerable time (up to 28.0s) on complex tasks; intermediate visual simulations reduce their response time by 7.5 seconds on average. In contrast, models show inconsistent gains from visual simulations, improving on most tasks but declining in specific cases like tangram puzzles (GPT-4o, o1) and cube net folding (Claude-3.5, Gemini-2.0 Flash), indicating that they cannot consistently leverage intermediate visual information. Even o3, a strong reasoning model, lags significantly behind human performance across tasks. By evaluating non-verbal visual reasoning beyond conventional text-based benchmarks, STARE highlights critical gaps in current AI spatial capabilities and sets a new standard for assessing spatial intelligence in multimodal models.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 1135