SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation

Published: 08 Aug 2025, Last Modified: 16 Sept 2025 · CoRL 2025 Poster · CC BY 4.0
Keywords: social robot navigation, scene understanding, vision-language models, VLM, benchmark
TL;DR: A VLM benchmark for scene understanding of social robot navigation scenarios.
Abstract: Robot navigation in dynamic, human-centered environments requires socially compliant decisions grounded in robust scene understanding, including spatiotemporal awareness and the ability to interpret human intentions. Recent Vision-Language Models (VLMs) exhibit promising capabilities, such as object recognition, common-sense reasoning, and contextual understanding, that align with the nuanced requirements of social robot navigation. However, it remains unclear whether VLMs can reliably perform the complex spatiotemporal reasoning and intent inference needed for safe and socially compliant robot navigation. In this paper, we introduce the Social Navigation Scene Understanding Benchmark (SocialNav-SUB), a Visual Question Answering (VQA) dataset and benchmark designed to evaluate VLMs for scene understanding in real-world social robot navigation scenarios. SocialNav-SUB provides a unified framework for evaluating VLMs against human and rule-based baselines across VQA tasks requiring spatial, spatiotemporal, and social reasoning in social robot navigation. Through experiments with state-of-the-art VLMs, we find that while the best-performing VLM achieves an encouraging probability of agreeing with human answers, it still underperforms a simpler rule-based approach and human consensus, indicating critical gaps in the social scene understanding of current VLMs. Our benchmark sets the stage for further research on foundation models for social robot navigation, offering a framework to explore how VLMs can be tailored to meet real-world social robot navigation needs. We will open-source the code and release the benchmark.
Supplementary Material: zip
Spotlight: mp4
Submission Number: 664