Keywords: Multimodal, Benchmark, Game, Planning
Abstract: As multimodal large language models (MLLMs) continue to demonstrate increasingly competitive performance across a broad spectrum of tasks, more intricate
and comprehensive benchmarks have been developed to assess these cutting-edge
models. These benchmarks introduce new challenges to core capabilities such
as perception, reasoning, and planning. However, existing multimodal benchmarks fall short in providing a focused evaluation of multi-step planning based
on spatial relationships in images. To bridge this gap, we present ING-VP,
the first INteractive Game-based Vision Planning benchmark, specifically designed to evaluate the spatial imagination and multi-step reasoning abilities of
MLLMs. ING-VP features 6 distinct games spanning 300 levels, each with
6 unique configurations; evaluating a single model involves over 60,000 rounds of interaction. The benchmark framework supports multiple comparison settings,
including image-only vs. text-only inputs, single-step vs. multi-step reasoning,
and with-history vs. without-history conditions, offering valuable insights into
the model’s capabilities. We evaluated numerous state-of-the-art MLLMs, with
the highest-performing model, Claude-3.5 Sonnet, achieving a best accuracy of
only 8.00%, far below the human accuracy of 65.66%. This work aims to provide
a specialized evaluation framework to drive advancements in MLLMs’ capacity
for complex spatial reasoning and planning.
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8692