Keywords: driver awareness, driving assistance, situational awareness
TL;DR: We propose a new protocol to record drivers' situational awareness, use it to collect a dataset, and build a predictive SA model
Abstract: Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers' situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts.
Moreover, collecting the data to train such an SA model is challenging:
being an internal human cognitive state, driver SA is difficult to measure directly, and non-verbal signals such as eye gaze are among its only outward manifestations. Traditional methods obtain SA labels through probes, yielding sparse, intermittent labels that are unsuitable for modeling a dense, temporally correlated process via machine learning. We propose a novel interactive labeling protocol that captures dense, continuous SA labels and use it to collect an object-level SA dataset in a VR driving simulator. Our dataset comprises 20 unique drivers' SA labels, driving data, and gaze (over 320 minutes of driving), and will be made public.
Additionally, we train an SA model from this data, formulating the object-level driver SA prediction problem as a semantic segmentation problem. Our formulation allows all objects in a scene at a timestep to be processed simultaneously, leveraging global scene context and local gaze-object relationships together.
Our experiments show that this formulation leads to improved performance over common-sense baselines and prior art on the SA prediction task.
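To make the segmentation framing concrete, below is a minimal sketch of how per-object SA could be scored in a single pass: a scene image and a gaze heatmap are fused into a dense per-pixel awareness map, which is then pooled over each object's mask. All names, shapes, and architectural choices here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: object-level SA prediction as semantic segmentation.
# A scene image plus a gaze heatmap are encoded together (global context +
# local gaze-object relationships), decoded into a dense awareness map, and
# pooled over per-object masks so every object is scored simultaneously.
import torch
import torch.nn as nn

class SASegmentationModel(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        # Encoder fuses the RGB scene (3 ch) with a gaze heatmap (1 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Head emits a per-pixel awareness logit map.
        self.head = nn.Conv2d(hidden, 1, 1)

    def forward(self, scene, gaze, object_masks):
        # scene: (B,3,H,W); gaze: (B,1,H,W); object_masks: (B,K,H,W) binary.
        pixel_logits = self.head(self.encoder(torch.cat([scene, gaze], dim=1)))
        # Average the dense map inside each object's mask -> one score/object.
        masked = pixel_logits * object_masks              # (B,K,H,W) broadcast
        area = object_masks.sum(dim=(2, 3)).clamp(min=1.0)
        return masked.sum(dim=(2, 3)) / area              # (B,K) SA logits
```

Because all K objects share one encoder pass over the full frame, the prediction for each object can draw on global scene context rather than being computed from isolated per-object features.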
Supplementary Material: zip
Spotlight Video: mp4
Website: https://harplab.github.io/DriverSA
Publication Agreement: pdf
Student Paper: yes
Submission Number: 227