Driver Activity Classification Using Generalizable Representations from Vision-Language Models

Published: 22 Apr 2024, Last Modified: 23 Apr 2024 · VLADR 2024 Poster · CC BY 4.0
Keywords: vision-language models, autonomous driving safety, driver activity recognition, control transitions
TL;DR: Language-driven foundation-model embeddings of images support high-accuracy classification of distracting driver activities, which can benefit safe autonomous-driving control transitions.
Abstract: Driver activity classification is crucial for road safety, with applications ranging from driver-assistance systems to autonomous-vehicle control transitions. In this paper, we present a novel approach that leverages generalizable representations from vision-language models for driver activity classification. Our method employs a Semantic Representation Late Fusion Neural Network (SRLF-Net) to process synchronized video frames from multiple perspectives: each frame is encoded with a pretrained vision-language encoder, and the resulting embeddings are fused to produce class probability predictions. By building on contrastively learned vision-language representations, our approach achieves robust performance across diverse driver activities. We evaluate our method on the Naturalistic Driving Action Recognition Dataset, demonstrating strong accuracy across many classes. Our results suggest that vision-language representations offer a promising avenue for driver monitoring systems, providing both accuracy and interpretability through natural language descriptors. We make our code available at [anonymized].
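
The abstract describes the pipeline only at a high level; the sketch below illustrates one plausible reading of it, with a frozen pretrained vision-language image encoder (open_clip's ViT-B-32 here, chosen arbitrarily) feeding a shared per-view classification head whose logits are averaged across camera perspectives. The fusion rule, head architecture, and class count are illustrative assumptions, not the authors' exact SRLF-Net configuration.

```python
# A minimal sketch of the multi-view late-fusion idea described in the abstract.
# Assumptions (not from the paper): open_clip ViT-B-32 as the vision-language
# encoder, a shared 2-layer MLP head per view, and mean-pooled per-view logits.
import torch
import torch.nn as nn
import open_clip

class SRLFNetSketch(nn.Module):
    """Encode each synchronized camera view with a frozen VLM image tower,
    classify each view independently, then fuse by averaging logits."""

    def __init__(self, image_encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.image_encoder = image_encoder
        for p in self.image_encoder.parameters():   # keep the pretrained encoder frozen
            p.requires_grad = False
        self.head = nn.Sequential(                  # shared per-view classifier (assumed shape)
            nn.Linear(embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, H, W) -- synchronized frames, one per perspective
        b, v = views.shape[:2]
        frames = views.flatten(0, 1)                # (batch * num_views, 3, H, W)
        with torch.no_grad():
            emb = self.image_encoder(frames)        # (batch * num_views, embed_dim)
        per_view_logits = self.head(emb).view(b, v, -1)
        return per_view_logits.mean(dim=1)          # late fusion: average across views

# Hypothetical wiring; the 16-class count matches common distracted-driving
# taxonomies but is only an example here.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
net = SRLFNetSketch(model.visual, embed_dim=512, num_classes=16)
dummy = torch.randn(2, 3, 3, 224, 224)              # 2 clips, 3 camera views each
print(net(dummy).shape)                             # torch.Size([2, 16])
```

Averaging logits is only one late-fusion choice; concatenating the per-view embeddings before a single head, or learning attention weights over views, would be equally consistent with the abstract's description.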
Submission Number: 19