doScenes: An Autonomous Driving Dataset with Natural Language Instruction for Human Interaction and Vision-Language Navigation

Published: 09 Dec 2024, Last Modified: 05 Mar 2025 · OpenReview Archive Direct Upload · CC BY-NC-ND 4.0
Abstract: Human-interactive robotic systems, particularly autonomous vehicles (AVs), must effectively integrate human instructions into their motion planning. This paper introduces doScenes, a novel dataset designed to facilitate research on human-vehicle instruction interactions, focusing on short-term directives that directly influence vehicle motion. By annotating multimodal sensor data with natural language instructions and referentiality tags, doScenes bridges the gap between instruction and driving response, enabling context-aware and adaptive planning. Unlike existing datasets that focus on ranking or scene-level reasoning, doScenes emphasizes actionable directives tied to static and dynamic scene objects. This framework addresses limitations in prior research, such as reliance on simulated data or predefined action sets, by supporting nuanced and flexible responses in real-world scenarios. This work lays the foundation for developing learning strategies that seamlessly integrate human instructions into autonomous systems, advancing safe and effective human-vehicle collaboration. We make our data publicly available at https://www.github.com/rossgreer/doScenes
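To make the annotation concept concrete, the sketch below shows what a doScenes-style record might look like and how one could filter instructions by their referentiality tag. All field names here (scene_token, instruction, referentiality, referent_type) are illustrative assumptions, not the dataset's actual schema; consult the repository linked above for the real format.

```python
# Hypothetical sketch of doScenes-style annotation records. Field names
# are assumptions for illustration only -- see
# https://www.github.com/rossgreer/doScenes for the actual schema.
import json

records = [
    {
        "scene_token": "scene-0103",         # links back to the multimodal sensor data
        "instruction": "Pull over behind the parked red truck.",
        "referentiality": "referential",     # instruction points at a scene object
        "referent_type": "static",           # static vs. dynamic scene object
    },
    {
        "scene_token": "scene-0917",
        "instruction": "Slow down a little.",
        "referentiality": "non-referential", # no specific object is referenced
        "referent_type": None,
    },
]

# Example query: keep only instructions tied to dynamic scene objects.
dynamic_refs = [
    r for r in records
    if r["referentiality"] == "referential" and r["referent_type"] == "dynamic"
]
print(json.dumps(dynamic_refs, indent=2))
```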