Toward Automatic Safe Driving Instruction: A Large-Scale Vision Language Model Approach

Published: 02 Dec 2025, Last Modified: 23 Dec 2025
MMLoSo 2025 Poster
License: CC BY 4.0
Keywords: vision and language, safe driving instruction, multimodal dataset, nlp application
TL;DR: We introduce a safe driving instruction dataset featuring synchronized driver-facing and road-facing videos and demonstrate that fine-tuned LVLMs can generate safety-aware instructions from these inputs, though challenges remain.
Abstract: Large-scale Vision Language Models (LVLMs) exhibit advanced capabilities on tasks that require visual understanding, including object detection. These capabilities have promising applications in various industrial domains, such as autonomous driving. For example, LVLMs can generate safety-oriented descriptions of videos captured by road-facing cameras. However, ensuring comprehensive safety also requires monitoring the driver-facing view to detect risky events, such as mobile phone use while driving. The ability to process synchronized inputs from both driver-facing and road-facing cameras is therefore necessary. In this study, we construct a dataset for this task, develop models, and investigate the capabilities of LVLMs by evaluating them on this dataset. Our experimental results demonstrate that while pre-trained LVLMs have limited effectiveness, fine-tuned LVLMs can generate accurate and safety-aware driving instructions. Nonetheless, several challenges remain, particularly in detecting subtle or complex events in the videos. Our findings and error analysis provide valuable insights that can contribute to the improvement of LVLM-based systems in this domain.
Submission Number: 5
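To illustrate the setting described in the abstract, the following is a minimal sketch, not the authors' code, of how synchronized frames from a driver-facing and a road-facing camera could be paired and passed to a fine-tuned LVLM for instruction generation. The file names, sampling rate, and the generate_instruction wrapper are assumptions for illustration; the actual model interface used in the paper is not specified here.

```python
# Minimal sketch: pair synchronized frames from two cameras and query an LVLM.
# `generate_instruction` is a hypothetical placeholder for the fine-tuned LVLM call.
import cv2


def sample_synced_frames(driver_path: str, road_path: str, every_n: int = 30):
    """Yield (driver_frame, road_frame) pairs taken at the same frame index."""
    driver_cap = cv2.VideoCapture(driver_path)
    road_cap = cv2.VideoCapture(road_path)
    idx = 0
    while True:
        ok_d, driver_frame = driver_cap.read()
        ok_r, road_frame = road_cap.read()
        if not (ok_d and ok_r):
            break
        if idx % every_n == 0:
            yield driver_frame, road_frame
        idx += 1
    driver_cap.release()
    road_cap.release()


def generate_instruction(driver_frames, road_frames) -> str:
    """Hypothetical wrapper around a fine-tuned LVLM; replace with the real inference call."""
    prompt = (
        "Given synchronized driver-facing and road-facing frames, describe any "
        "risky driver behavior or road hazard and give one safe driving instruction."
    )
    raise NotImplementedError("Plug the actual LVLM inference (model, processor) in here.")


if __name__ == "__main__":
    # Hypothetical input files; one frame pair is sampled per second at 30 fps.
    pairs = list(sample_synced_frames("driver_cam.mp4", "road_cam.mp4"))
    driver_frames = [d for d, _ in pairs]
    road_frames = [r for _, r in pairs]
    # instruction = generate_instruction(driver_frames, road_frames)
```

The key design point reflected here is that both views are sampled at identical frame indices, so the model can relate driver behavior (e.g., phone use) to the concurrent road situation when producing an instruction.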