A Safety-Adapted Loss for Pedestrian Detection in Autonomous Driving

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICRA 2024 · CC BY-SA 4.0
Abstract: In safety-critical domains like autonomous driving (AD), errors by the object detector may endanger pedestrians and other vulnerable road users (VRUs). As raw evaluation metrics are not an adequate safety indicator, recent works leverage domain knowledge to identify safety-relevant VRUs and to back-annotate the criticality of each interaction to the object detector. However, those approaches do not consider the safety factor in the deep neural network (DNN) training process. Thus, state-of-the-art DNNs penalize all misdetections equally, irrespective of their importance for the safe driving task. Hence, to mitigate the occurrence of safety-critical failure cases such as false negatives, a safety-aware training strategy is needed to enhance the detection performance for critical pedestrians. In this paper, we propose a novel, safety-adapted loss variant that leverages the estimated per-pedestrian criticality during training. To this end, we exploit the reachable set-based time-to-collision (TTC_RSB) metric from the motion domain, along with distance information, to account for the worst-case threat. Our evaluation results using RetinaNet and FCOS on the nuScenes dataset demonstrate that training the models with our safety-adapted loss function mitigates the misdetection of safety-critical pedestrians while maintaining robust performance for the general case, i.e., safety-irrelevant pedestrians.
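To illustrate the idea of weighting the detection loss by per-pedestrian criticality, the following is a minimal, hypothetical sketch (not the paper's exact formulation). It assumes criticality weights are derived offline from a worst-case TTC_RSB value and the ego-to-pedestrian distance, and that they scale a focal-style classification loss as used in RetinaNet/FCOS. The function names, the weight range, and the linear ramps are illustrative assumptions.

```python
# Hypothetical sketch of a criticality-weighted detection loss (assumed form,
# not the authors' implementation).
import torch
import torch.nn.functional as F


def criticality_weight(ttc_rsb: torch.Tensor, distance: torch.Tensor,
                       ttc_max: float = 4.0, dist_max: float = 50.0) -> torch.Tensor:
    """Map worst-case TTC (s) and distance (m) to a weight in [1, 2].

    Pedestrians with small TTC and small distance receive weights close to 2;
    safety-irrelevant pedestrians stay near 1, so the standard loss is recovered
    for the general case. The linear ramps and bounds are illustrative choices.
    """
    ttc_term = 1.0 - (ttc_rsb / ttc_max).clamp(0.0, 1.0)
    dist_term = 1.0 - (distance / dist_max).clamp(0.0, 1.0)
    return 1.0 + 0.5 * (ttc_term + dist_term)


def safety_adapted_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                              weights: torch.Tensor,
                              alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Focal loss scaled per prediction by the criticality of its matched pedestrian.

    `weights` is 1.0 for background and safety-irrelevant objects, >1 for critical ones,
    so missing a critical pedestrian is penalized more heavily during training.
    """
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                        # probability of the correct class
    focal = alpha * (1.0 - p_t) ** gamma * ce   # standard focal term
    return (weights * focal).sum() / weights.numel()


if __name__ == "__main__":
    logits = torch.randn(8)
    targets = torch.randint(0, 2, (8,)).float()
    ttc = torch.tensor([0.5, 1.0, 3.0, 8.0, 2.0, 6.0, 0.8, 10.0])
    dist = torch.tensor([5.0, 10.0, 30.0, 60.0, 15.0, 45.0, 8.0, 80.0])
    w = criticality_weight(ttc, dist)
    print(safety_adapted_focal_loss(logits, targets, w))
```

In this sketch, the criticality weighting leaves the loss for safety-irrelevant pedestrians unchanged, which is consistent with the paper's reported behavior of preserving performance on the general case while reducing misdetections of critical pedestrians.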