Uncertainty as a criterion for SOTIF evaluation of deep learning models in autonomous driving systems

Published: 10 Oct 2024, Last Modified: 14 Dec 2024, NeurIPS BDU Workshop 2024 Poster, CC BY 4.0
Keywords: Uncertainty quantification, Autonomous driving, Evaluation criteria, Safety of the intended functionality
TL;DR: This paper proposes using uncertainty as an evaluation criterion to enhance the safety of deep learning models in autonomous driving, particularly by taking precautions against potential hazards in unknown scenarios.
Abstract:

Ensuring the safety of deep learning models in autonomous driving systems is crucial. In compliance with the automotive safety standard ISO 21448, we propose uncertainty as a new, complementary evaluation criterion for ensuring the safety of the intended functionality (SOTIF) of deep learning-based systems. To evaluate and improve the trajectory prediction function of autonomous driving systems, we use epistemic uncertainty as the criterion, quantified by a single-forward-pass model in consideration of constraints on resources and response time. Experimental results on data collected from the CARLA simulator demonstrate that the uncertainty criterion can detect functional insufficiencies in unknown, potentially hazardous driving scenarios and subsequently trigger additional learning.
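The abstract does not specify the exact single-forward-pass uncertainty method; the sketch below is only an illustration of the general idea, assuming an evidential-regression-style head on a toy trajectory predictor. All names, layer sizes, and the SOTIF acceptance threshold are hypothetical.

```python
# Minimal sketch (not the authors' implementation): single-forward-pass
# epistemic uncertainty for trajectory prediction, in the style of deep
# evidential regression. The threshold and architecture are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialTrajectoryPredictor(nn.Module):
    """Predicts future (x, y) waypoints with per-waypoint evidential parameters."""

    def __init__(self, history_dim=20, hidden_dim=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.Sequential(
            nn.Linear(history_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Four evidential parameters (gamma, nu, alpha, beta) per output coordinate.
        self.head = nn.Linear(hidden_dim, horizon * 2 * 4)

    def forward(self, history):
        out = self.head(self.encoder(history))
        gamma, nu, alpha, beta = out.view(-1, self.horizon, 2, 4).unbind(-1)
        nu = F.softplus(nu)                 # > 0
        alpha = F.softplus(alpha) + 1.0     # > 1
        beta = F.softplus(beta)             # > 0
        return gamma, nu, alpha, beta

    @staticmethod
    def epistemic_uncertainty(nu, alpha, beta):
        # Variance of the predicted mean under the evidential posterior,
        # Var[mu] = beta / (nu * (alpha - 1)), averaged over waypoints and axes.
        return (beta / (nu * (alpha - 1.0))).mean(dim=(1, 2))


if __name__ == "__main__":
    model = EvidentialTrajectoryPredictor()
    history = torch.randn(4, 20)                 # 4 scenes, flattened past states
    gamma, nu, alpha, beta = model(history)      # single forward pass
    u = EvidentialTrajectoryPredictor.epistemic_uncertainty(nu, alpha, beta)
    threshold = 1.0                              # assumed SOTIF acceptance threshold
    flagged = u > threshold                      # candidate functional insufficiency
    print(u, flagged)
```

In such a setup, scenes whose epistemic uncertainty exceeds the threshold would be treated as potential unknown, hazardous scenarios and routed to further data collection and retraining, which is the role the abstract assigns to the uncertainty criterion.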

Submission Number: 114