Do Vision Language Models infer human intention without visual perspective-taking? Towards a scalable "One-Image-Probe-All" dataset
Keywords: Theory-of-Mind (ToM), Knowledge Grounding, World Model, Multi-Modal Large Language Model, Scalable Benchmark
Abstract: At the core of understanding the knowledge grounding of Multimodal Large Language Models (MLLMs) are two key challenges: (1) ensuring fair comparability across concepts and (2) scaling multimodal datasets to reflect real-world complexity. This paper addresses both through the Omni-Perspective benchmark, which scales the construction of 5-level question-context-answer (QCA) sets from a single real-world image. The benchmark covers 3 concepts along the human Theory-of-Mind (ToM) ability hierarchy and is further divided into 10 fine-grained sub-difficulties. Through inference tasks, complexity analysis, and ablation studies, we evaluate over 2,200 consolidated QCAs on 61 MLLMs. Our findings reveal a key observation: MLLMs follow the hypothesized human ToM grounding pathway, with the exception of level-2 perspective-taking. Furthermore, the dataset enables nuanced analysis of how these observations change across difficulty levels, modalities, distractor logic, and prompt types.
Submission Number: 21