Abstract: Deep learning models typically render decisions based on probabilistic outputs. However, in safety-critical applications such as environment perception for autonomous vehicles, erroneous decisions made by semantic segmentation models may lead to catastrophic results. Consequently, it would be beneficial if these models could explicitly indicate the reliability of their predictions; stakeholders expect deep learning models to convey the degree of uncertainty associated with their decisions. In this paper, we introduce EviSeg, a predictive uncertainty quantification method for semantic segmentation models based on Dempster-Shafer (DS) theory. Specifically, we extract discriminative information, i.e., the parameters and output features of the last convolutional layer of a semantic segmentation model. We then model this multi-source evidence as evidential weights and estimate the predictive uncertainty of the segmentation model with Dempster's rule of combination. The proposed method requires no changes to the model architecture, training process, or loss function, so the uncertainty quantification process does not compromise model performance. Validated on the urban road scene dataset CamVid, the proposed method improves computational efficiency by a factor of three to four over the baseline methods while maintaining comparable performance, an improvement that is critical for real-time applications.
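The abstract does not specify how EviSeg forms its evidential weights, but the fusion step it names, Dempster's rule of combination, is standard. Below is a minimal sketch of that rule for discrete mass functions over a frame of discernment; the function name, the two toy evidence sources, and the class labels are illustrative only and do not come from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of classes -> mass)
    using Dempster's rule of combination, normalizing out conflict."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined.")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Toy example: two pieces of evidence about a pixel's class in {road, car}.
theta = frozenset({"road", "car"})                 # frame of discernment
m_a = {frozenset({"road"}): 0.6, theta: 0.4}       # evidence source A
m_b = {frozenset({"road"}): 0.3, frozenset({"car"}): 0.2, theta: 0.5}

m = dempster_combine(m_a, m_b)
# Mass remaining on the whole frame (theta) reflects residual uncertainty.
print({tuple(sorted(s)): round(v, 3) for s, v in m.items()})
```

In an evidential-segmentation setting of this kind, one would combine per-source mass functions for each pixel and read the mass left on the full frame (or a derived measure such as conflict) as the predictive uncertainty.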