OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models

ICLR 2026 Conference Submission 8638 Authors

17 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Spatial Reasoning, Vision-Language Models, Benchmark
TL;DR: OmniSpatial is a large-scale comprehensive benchmark that reveals persistent gaps in VLMs’ spatial reasoning across dynamic, logical, interaction, and perspective-taking tasks, while PointGraph and SpatialCoT offer promising improvements.
Abstract: Spatial reasoning is a key aspect of cognitive psychology and remains a bottleneck for current vision-language models (VLMs). While extensive research has aimed to evaluate or improve VLMs' understanding of basic spatial relations, such as distinguishing left from right, judging near versus far, and counting objects, these tasks cover only the most elementary layer of spatial reasoning and are largely approaching saturation in the latest reasoning models. In this work, we introduce OmniSpatial, a comprehensive and challenging benchmark for spatial reasoning, grounded in cognitive psychology. OmniSpatial covers four major categories: dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking, with 50 fine-grained subcategories. Through careful manual annotation, we construct over 8.4K question-answer pairs. Extensive experiments show that both open- and closed-source VLMs exhibit significant limitations in comprehensive spatial reasoning. We also explore two strategies, PointGraph (explicit scene-graph cues) and SpatialCoT (novel-view chain-of-thought), to bolster spatial reasoning.
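Illustration (not from the submission): a minimal sketch of how a PointGraph-style prompt augmentation might be scored on multiple-choice spatial QA of the kind the benchmark describes. The `SpatialQA` fields, `build_prompt`, and the `query_vlm` callable are hypothetical placeholders, not the authors' actual pipeline.

```python
# Hedged sketch: evaluate a VLM on multiple-choice spatial QA, optionally
# prepending explicit scene-graph cues (the PointGraph idea) to the prompt.
from dataclasses import dataclass


@dataclass
class SpatialQA:
    image_path: str
    question: str
    options: list[str]   # e.g. ["A. left of the chair", "B. right of the chair", ...]
    answer: str          # ground-truth option letter, e.g. "B"
    scene_graph: str     # serialized object/relation cues (assumed to be precomputed)


def build_prompt(item: SpatialQA, use_pointgraph: bool = True) -> str:
    """Compose the question prompt, with scene-graph cues prepended if requested."""
    cue = f"Scene graph:\n{item.scene_graph}\n\n" if use_pointgraph else ""
    opts = "\n".join(item.options)
    return f"{cue}Question: {item.question}\nOptions:\n{opts}\nAnswer with a single letter."


def accuracy(items: list[SpatialQA], query_vlm, use_pointgraph: bool = True) -> float:
    """query_vlm(image_path, prompt) -> str is a hypothetical VLM inference call."""
    correct = 0
    for item in items:
        reply = query_vlm(item.image_path, build_prompt(item, use_pointgraph))
        # Treat the first option letter appearing in the reply as the prediction.
        pred = next((ch for ch in reply if ch in "ABCD"), None)
        correct += int(pred == item.answer)
    return correct / max(len(items), 1)
```

Comparing `accuracy(items, query_vlm, use_pointgraph=True)` against the `False` setting would give a simple ablation of the scene-graph cue, under the assumptions above.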
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 8638