When, Where, and What? A Benchmark for Accident Anticipation and Localization with Large Language Models
Abstract: As autonomous driving systems become an increasingly routine part of daily transportation, the ability to accurately anticipate and mitigate potential traffic accidents is paramount. Traditional accident anticipation models, which rely primarily on dashcam video, are adept at predicting when an accident may occur but fall short of localizing the incident and identifying the entities involved. To address this gap, this study introduces a novel framework that integrates Large Language Models (LLMs) to enhance predictive capability across multiple dimensions: what, when, and where an accident might occur. We develop an innovative chain-based attention mechanism that dynamically adjusts to prioritize high-risk elements within complex driving scenes. This mechanism is complemented by a three-stage model that processes the outputs of smaller models into detailed multimodal inputs for LLMs, enabling a more nuanced understanding of traffic dynamics. Empirical validation on the DAD, CCD, and A3D datasets demonstrates superior performance in Average Precision (AP) and Mean Time-To-Accident (mTTA), establishing new benchmarks for accident prediction technology. Our approach not only advances the technological framework for autonomous driving safety but also improves human-AI interaction, making the predictive insights generated by autonomous systems more intuitive and actionable.
Primary Subject Area: [Content] Media Interpretation
Relevance To Conference: Toward safer autonomous driving, our research introduces a comprehensive framework that leverages Large Language Models (LLMs) to enhance the predictive capabilities of autonomous driving systems. By integrating cutting-edge linguistic and cognitive technologies, our approach not only predicts potential incidents more accurately but also improves the interaction between human operators and AI-driven systems, providing a richer, more intuitive user experience. Our key contributions are:
1) We expand the traditional scope of Accident Anticipation (What and When) to include the localization of objects involved in potential accidents (Where), a task we refer to as Accident Localization. For the first time, we utilize LLMs to analyze complex scene semantics, offering precise and timely accident alerts to passengers. Our system predicts whether an accident will occur (What), when it might happen (When), and where it will occur (Where), filling a crucial gap in accident prevention and enhancing the safety of autonomous driving.
2) We introduce a novel chain-based attention mechanism that iteratively refines feature representations through a dynamic routing process enhanced by Markov chain noise models. This allows our system to dynamically adjust attention weights across the objects in multi-agent traffic scenes, prioritizing those with higher risk levels. The attention mechanism is part of a three-stage model that preprocesses the outputs of smaller models into multimodal inputs (image and text) for LLMs, guiding the LLMs to produce more accurate and detailed scene descriptions.
3) Our model has undergone rigorous testing on benchmark datasets such as DAD, CCD, and A3D, where it has demonstrated superior performance in key metrics like Average Precision (AP) and Mean Time-To-Accident (mTTA). The results not only surpass existing methodologies but also mark a significant advancement in accident prediction technology, setting new standards for the field.
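To make the chain-based attention idea of contribution 2 concrete, the sketch below shows one way such iterative re-weighting could look. This is a simplified illustration under assumptions: the function name `chain_attention`, the agreement-based routing update, and the Gaussian perturbation standing in for the Markov-chain noise model are all hypothetical, not the paper's actual implementation.

```python
import numpy as np

def chain_attention(features, n_iters=3, noise=0.05, seed=0):
    """Iteratively refine per-object attention weights (toy sketch).

    features: (n_objects, d) array of object embeddings.
    Returns a normalized attention weight per object.
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    logits = np.zeros(n)                      # routing logits, start uniform
    for _ in range(n_iters):
        w = np.exp(logits - logits.max())
        w /= w.sum()                          # softmax attention weights
        context = w @ features                # weighted scene context
        # agreement between each object and the current context
        agreement = features @ context / np.sqrt(d)
        # Markov-style step: the next logits depend only on the current
        # logits, plus Gaussian noise as a toy stand-in for the noise model
        logits = logits + agreement + rng.normal(0.0, noise, size=n)
    w = np.exp(logits - logits.max())
    return w / w.sum()

# toy example: 4 detected objects with 8-dim features
feats = np.random.default_rng(1).normal(size=(4, 8))
w = chain_attention(feats)
print(w.round(3))
```

Objects whose features agree with the evolving scene context accumulate larger logits across iterations, which is the intended "prioritize high-risk elements" behavior; in the real system the update would be learned rather than hand-coded.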
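The three-stage flow in contribution 2 (small perception models, then a multimodal prompt builder, then an LLM query) might be organized along the following lines. The `Detection` record, the `build_prompt` helper, and the prompt wording are hypothetical placeholders for illustration, not the submission's actual system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # object class from a small detector
    box: tuple        # (x1, y1, x2, y2) pixel coordinates
    risk: float       # risk score from the anticipation model

def build_prompt(detections, frame_time_s):
    """Stage 2: format small-model outputs as text for the LLM (stage 3)."""
    lines = [f"Dashcam frame at t={frame_time_s:.1f}s. Detected objects:"]
    for d in sorted(detections, key=lambda d: -d.risk):  # riskiest first
        lines.append(f"- {d.label} at {d.box}, risk={d.risk:.2f}")
    lines.append("Will an accident occur (what), when, and where? "
                 "Name the involved objects and their locations.")
    return "\n".join(lines)

dets = [Detection("car", (120, 80, 260, 180), 0.91),
        Detection("pedestrian", (300, 90, 340, 200), 0.35)]
print(build_prompt(dets, 3.2))
```

The text prompt would be paired with the frame image as the multimodal input; ordering objects by risk is one plausible way to guide the LLM's attention toward the likely accident participants.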
Supplementary Material: zip
Submission Number: 3398