Keywords: Autonomous driving
TL;DR: We introduce Impromptu VLA, a new 80k-clip dataset of unstructured "corner case" driving scenarios with rich QA annotations, which significantly boosts the safety and planning performance of Vision-Language-Action models.
Abstract: Vision-Language-Action (VLA) models for autonomous driving show promise but falter in unstructured corner case scenarios, largely due to a scarcity of targeted benchmarks. To address this, we introduce Impromptu VLA. Our core contribution is the Impromptu VLA Dataset: over 80,000 meticulously curated video clips, distilled from more than 2 million clips drawn from 8 open-source, large-scale driving datasets. This dataset is built upon our novel taxonomy of four challenging unstructured categories and features rich, planning-oriented question-answering annotations and action trajectories.
Crucially, experiments demonstrate that VLAs trained with our dataset achieve substantial performance gains on established benchmarks—improving closed-loop NeuroNCAP scores and collision rates, and reaching near state-of-the-art L2 accuracy in open-loop nuScenes trajectory prediction. Furthermore, our Q&A suite serves as an effective diagnostic, revealing clear VLM improvements in perception, prediction, and planning.
Our code, data and models are available at https://github.com/ahydchh/Impromptu-VLA
Croissant File: json
Dataset URL: https://huggingface.co/datasets/aaaaaap/unstructed
Code URL: https://github.com/ahydchh/Impromptu-VLA
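For convenience, below is a minimal sketch of fetching the dataset files from the Hugging Face Hub repository given in the Dataset URL above. It assumes only the standard `huggingface_hub` client; the repository's internal file layout and annotation schema are not documented on this page.

```python
# Minimal sketch: download the Impromptu VLA dataset repo from the Hugging Face Hub.
# Assumption: the repo at the Dataset URL above is a standard dataset repository;
# its file layout and annotation schema are not described on this page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="aaaaaap/unstructed",  # taken from the Dataset URL above
    repo_type="dataset",
)
print(f"Dataset files downloaded to: {local_dir}")
```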
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 1227