Abstract: Visual perspective-taking (VPT), the ability to understand the viewpoint of another
person, enables individuals to anticipate the actions of other people. For instance, a
driver can avoid accidents by assessing what pedestrians see. Humans typically
develop this skill in early childhood, but it remains unclear whether the recently
emerging Vision Language Models (VLMs) possess such capability. Furthermore,
as these models are increasingly deployed in the real world, understanding how
they perform nuanced tasks like VPT becomes essential. In this paper, we introduce
two manually curated datasets, Isle-Bricks and Isle-Dots, for testing VPT skills, and
we use them to evaluate 12 commonly used VLMs. Across all models, we observe a
significant performance drop when perspective-taking is required. Additionally, we
find that performance on object detection tasks is poorly correlated with performance
on VPT tasks, suggesting that the existing benchmarks might not be sufficient to
understand this problem. The code and the dataset will be available at this URL.