Keywords: Multimodal Large Language Models, Vision Language Models, Spatial Reasoning
Abstract: The ability to understand and reason about spatial relationships between objects in images is an important component of visual reasoning. This skill rests on recognizing and localizing objects of interest and determining their spatial relation. Early vision-and-language models (VLMs) have been shown to struggle with recognizing spatial relations. We extend the previously released What'sUp dataset and propose a novel comprehensive evaluation for spatial relationship understanding that highlights the strengths and weaknesses of 9 Multimodal LLMs (MLLMs), in comparison with the 18 VLMs tested in the What'sUp dataset. Our experiments encompass three classes of MLLMs that vary in their parameter sizes (ranging from 7B to 110B), training/instruction-tuning methods, and visual resolution, allowing us to benchmark their performance and scrutinize scaling behavior on this task.
Submission Number: 10