Keywords: MLLM, Spatial Understanding, VLM
TL;DR: Training- and GPU-free Spatial Prompting for Multimodal Large Language Models
Abstract: We introduce See&Trek, the first training-free prompting framework tailored to enhance the spatial understanding of Multimodal Large Language Models (MLLMs) under vision-only constraints. While prior efforts have incorporated modalities like depth or point clouds to improve spatial reasoning, purely visual-spatial understanding remains underexplored. See&Trek addresses this gap by focusing on two core principles: increasing visual diversity and motion reconstruction. For visual diversity, we conduct Maximum Semantic Richness Sampling, which employs an off-the-shelf perception model to extract semantically rich keyframes that capture scene structure. For motion reconstruction, we simulate visual trajectories and encode relative spatial positions into keyframes to preserve both spatial relations and temporal coherence. Our method is training- and GPU-free, requiring only a single forward pass, and can be seamlessly integrated into existing MLLMs. Extensive experiments on VSI-Bench and STI-Bench show that See&Trek consistently boosts the performance of various MLLMs across diverse spatial reasoning tasks, with improvements of up to +3.5%, offering a promising path toward stronger spatial intelligence.
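To make the keyframe-selection idea concrete, below is a minimal sketch of what Maximum Semantic Richness Sampling could look like. It assumes a hypothetical `detect_labels(frame)` function standing in for the off-the-shelf perception model, and uses distinct-category coverage as a simple richness score; the paper's exact perception backbone and scoring rule may differ.

```python
from typing import Callable, List, Sequence, Set

import numpy as np


def semantic_richness(labels: Set[str]) -> float:
    """Score a frame by the number of distinct semantic categories it contains."""
    return float(len(labels))


def max_semantic_richness_sampling(
    frames: Sequence[np.ndarray],
    detect_labels: Callable[[np.ndarray], Set[str]],
    num_keyframes: int = 8,
) -> List[int]:
    """Greedily select keyframes that add the most not-yet-covered categories.

    `detect_labels` is a placeholder for the off-the-shelf perception model;
    it returns the set of category names detected in a frame.
    """
    per_frame = [detect_labels(f) for f in frames]
    covered: Set[str] = set()
    selected: List[int] = []
    for _ in range(min(num_keyframes, len(frames))):
        # Marginal gain = richness of the categories a frame would newly cover.
        gains = [
            semantic_richness(labels - covered) if i not in selected else -1.0
            for i, labels in enumerate(per_frame)
        ]
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break  # remaining frames add no new semantics
        selected.append(best)
        covered |= per_frame[best]
    return sorted(selected)  # keep temporal order for the MLLM prompt
```

The greedy coverage formulation is one plausible reading of "maximum semantic richness": each added keyframe maximizes the number of new scene elements exposed to the MLLM, which keeps the prompt short while preserving scene structure.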
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 5923