Talk2BEV: Language-enhanced Bird's-eye View Maps for Autonomous Driving

Published: 01 Jan 2024, Last Modified: 17 Feb 2025 · ICRA 2024 · CC BY-SA 4.0
Abstract: This work introduces Talk2BEV, a large vision-language model (LVLM) interface for bird's-eye view (BEV) maps commonly used in autonomous driving. While existing perception systems for autonomous driving have largely focused on a pre-defined (closed) set of object categories and driving scenarios, Talk2BEV eliminates the need for BEV-specific training, relying instead on well-performing pre-trained LVLMs. This enables a single system to cater to a variety of autonomous driving tasks, encompassing visual and spatial reasoning, predicting the intents of traffic actors, and decision-making based on visual cues. We extensively evaluate Talk2BEV on a large number of scene understanding tasks that rely both on the ability to interpret free-form natural language queries and on grounding these queries in the visual context embedded in the language-enhanced BEV map. To enable further research in LVLMs for autonomous driving scenarios, we develop and release Talk2BEV-Bench, a benchmark encompassing 1000 human-annotated BEV scenarios, with more than 20,000 questions and ground-truth responses from the NuScenes dataset. We encourage the reader to view the demos on our project page: https://llmbev.github.io/talk2bev/
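To make the idea of grounding free-form queries in a language-enhanced BEV map more concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes each BEV object carries its ego-centred position together with an LVLM-generated caption, and resolves a simple spatial query over those entries. All names here (`BEVObject`, `answer_distance_query`, the example captions) are hypothetical and not part of the Talk2BEV release.

```python
# Toy "language-enhanced" BEV map: each object stores a BEV centroid and a
# caption produced by a pre-trained LVLM. Hypothetical sketch for illustration.
from dataclasses import dataclass
from math import hypot


@dataclass
class BEVObject:
    object_id: int
    centroid_xy: tuple[float, float]  # metres in the ego-centred BEV frame
    caption: str                      # free-form description from a pre-trained LVLM


def answer_distance_query(objects: list[BEVObject], keyword: str) -> BEVObject | None:
    """Return the object whose caption mentions `keyword` and is closest to the ego vehicle."""
    matches = [o for o in objects if keyword.lower() in o.caption.lower()]
    if not matches:
        return None
    return min(matches, key=lambda o: hypot(*o.centroid_xy))


if __name__ == "__main__":
    scene = [
        BEVObject(0, (12.4, -3.1), "a white delivery van parked on the right shoulder"),
        BEVObject(1, (5.8, 1.9), "a cyclist moving in the adjacent lane"),
    ]
    nearest = answer_distance_query(scene, "cyclist")
    if nearest is not None:
        print(f"object {nearest.object_id} at {nearest.centroid_xy}: {nearest.caption}")
```

In the full system described by the abstract, the captions would come from pre-trained LVLMs rather than being hand-written, and reasoning over the map would be delegated to the language model; the sketch only illustrates the kind of per-object structure a language-enhanced BEV map pairs with free-form queries.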