SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards

Published: 23 Sept 2025 · Last Modified: 19 Nov 2025 · SpaVLE Oral · CC BY 4.0
Keywords: Multimodal Large Language Models, RL, Spatial Reasoning
TL;DR: We introduce SpatialThinker, a 3D-aware reasoning MLLM trained with dense spatial rewards via RL on 7K synthetic VQA samples (STVQA-7K, released with this work). SpatialThinker achieves roughly 2x the gains of vanilla RL and surpasses GPT-4o on several tasks.
Abstract: Multimodal large language models (MLLMs) have achieved remarkable progress in vision–language tasks, but they continue to struggle with spatial understanding. Existing spatial MLLMs often rely on explicit 3D inputs or architecture-specific modifications, and remain constrained by the need for large-scale datasets or by sparse supervision. To address these limitations, we introduce SpatialThinker, a 3D-aware MLLM trained with RL to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations, and reasoning towards an answer guided by dense spatial rewards. Our work makes two key contributions: (1) a data synthesis pipeline that generates STVQA-7K, a high-quality spatial VQA dataset, and (2) online RL with a multi-objective dense spatial reward that enforces spatial grounding. SpatialThinker-7B outperforms supervised fine-tuning and a sparse-reward RL baseline on spatial understanding and real-world VQA benchmarks, nearly doubling the gain over the base model compared to sparse RL, and surpassing GPT-4o. These results demonstrate the effectiveness of combining spatial supervision with reward-aligned reasoning for robust 3D spatial understanding with limited data, advancing MLLMs towards human-level visual reasoning.
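For concreteness, below is a minimal sketch of how a multi-objective dense spatial reward of this kind might be composed: a weighted sum of a format term, an answer-correctness term, and a scene-graph grounding term. The weights, the `<think>`/`<answer>` tag layout, and the relation-triple matching are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical weights for the three reward terms; the paper's actual
# coefficients are not specified in the abstract.
W_FORMAT, W_ANSWER, W_SPATIAL = 0.1, 0.6, 0.3

def format_reward(response: str) -> float:
    """1.0 if the response follows an assumed <think>...</think><answer>...</answer> layout."""
    pattern = r"<think>.*</think>\s*<answer>.*</answer>"
    return 1.0 if re.search(pattern, response, re.S) else 0.0

def answer_reward(response: str, gold_answer: str) -> float:
    """Sparse exact-match reward on the final answer span."""
    m = re.search(r"<answer>(.*?)</answer>", response, re.S)
    pred = m.group(1).strip().lower() if m else ""
    return 1.0 if pred == gold_answer.strip().lower() else 0.0

def spatial_reward(pred_relations: set, gold_relations: set) -> float:
    """Dense grounding term: F1 overlap between predicted and reference
    scene-graph relations, represented as (subject, predicate, object) triples."""
    if not pred_relations or not gold_relations:
        return 0.0
    tp = len(pred_relations & gold_relations)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_relations)
    recall = tp / len(gold_relations)
    return 2 * precision * recall / (precision + recall)

def dense_spatial_reward(response, gold_answer, pred_relations, gold_relations) -> float:
    """Combine the terms into a single scalar reward for online RL."""
    return (W_FORMAT * format_reward(response)
            + W_ANSWER * answer_reward(response, gold_answer)
            + W_SPATIAL * spatial_reward(pred_relations, gold_relations))
```

The design intuition is that the grounding term supplies a dense learning signal on intermediate spatial structure even when the final answer is wrong, which is where a sparse answer-only reward provides no gradient.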
Submission Type: Short Research Paper (< 4 Pages)
Submission Number: 69