SpatialTree : How Spatial Abilities Branch Out in MLLMs

Published: 02 Mar 2026 · Last Modified: 23 Apr 2026 · ES-Reasoning @ ICLR 2026 Best Paper · CC BY 4.0
Keywords: Spatial Intelligence; Taxonomy; Ability transfer
TL;DR: A hierarchical taxonomy of spatial intelligence for exploring how spatial abilities transfer, emerge, and can be improved.
Abstract: Cognitive science suggests that spatial ability develops progressively—from perception to reasoning and interaction. Yet in multimodal LLMs (MLLMs), this hierarchy remains poorly understood, as most studies focus on a narrow set of tasks. We introduce SpatialTree, a cognitive-science-inspired hierarchy that organizes spatial abilities into four levels: low-level perception (L1), mental mapping (L2), simulation (L3), and agentic competence (L4). Based on this taxonomy, we construct the first capability-centric hierarchical benchmark, thoroughly evaluating mainstream MLLMs across 27 sub-abilities. The evaluation reveals a clear structure: L1 skills are largely orthogonal, whereas higher-level skills are strongly correlated, indicating increasing interdependency. Through targeted supervised fine-tuning, we uncover a surprising transfer dynamic: negative transfer within L1, but strong cross-level transfer from low- to high-level abilities with notable synergy. Finally, we explore how to improve the entire hierarchy. We find that naïve RL that encourages extensive “thinking” is unreliable: it helps complex reasoning but hurts intuitive perception. We propose a simple auto-think strategy that suppresses unnecessary deliberation, enabling RL to consistently improve performance across all levels. By building SpatialTree, we provide a proof-of-concept framework for understanding and systematically scaling spatial abilities in MLLMs.
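To make the four-level structure concrete, below is a minimal Python sketch of how the SpatialTree taxonomy could be represented for evaluation. Only the level names (L1 perception, L2 mental mapping, L3 simulation, L4 agentic competence) come from the abstract; the example sub-ability names, the `SpatialLevel` class, and the aggregation logic are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field


@dataclass
class SpatialLevel:
    """One level of the SpatialTree hierarchy (names of sub-abilities are hypothetical)."""
    name: str
    sub_abilities: list[str] = field(default_factory=list)


# Hypothetical instantiation of the four levels described in the abstract.
# The paper evaluates 27 sub-abilities in total; only a few placeholders are shown here.
SPATIAL_TREE = [
    SpatialLevel("L1: low-level perception", ["depth ordering", "relative size"]),
    SpatialLevel("L2: mental mapping", ["viewpoint change", "map sketching"]),
    SpatialLevel("L3: simulation", ["mental rotation", "trajectory prediction"]),
    SpatialLevel("L4: agentic competence", ["navigation", "object manipulation"]),
]


def level_score(per_ability_scores: dict[str, float], level: SpatialLevel) -> float:
    """Average benchmark accuracy over a level's sub-abilities (simple mean, as an assumption)."""
    scores = [per_ability_scores[a] for a in level.sub_abilities if a in per_ability_scores]
    return sum(scores) / len(scores) if scores else float("nan")
```

Such a capability-centric layout lets per-level scores be reported separately, which is what exposes the pattern the abstract describes: orthogonal L1 skills versus strongly correlated higher-level skills.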
Submission Number: 2