Alleviating Overconfident Failure Predictions via Masking Predictive Logits in Semantic Segmentation

ICANN (2) 2022, published 2022 (modified: 26 Oct 2022)
Abstract: Semantic segmentation is currently formulated as a pixel-wise classification task, with networks and training settings similar to those used for image classification. We observe an excessive overconfidence phenomenon in semantic segmentation with respect to the model's classification scores: unlike image classifiers, segmentation networks assign unduly high predictive probabilities to failure predictions, which can have severe repercussions in safety-sensitive applications. To this end, we propose manually perturbing the predicted probability distribution by masking predictive logits during training, which explicitly forces the model to re-learn potential patterns, based on the intuition that meaningful patterns help alleviate overconfident failure predictions. We present a direct instantiation that randomly zeroes out the model's predictive logits while keeping their expectation unchanged before computing the loss in the training phase. This instantiation requires no additional computational cost or customized architecture, only a masking function. Empirical results across various network architectures demonstrate its feasibility and its effectiveness at alleviating overconfident failure predictions in semantic segmentation.
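The instantiation described above (randomly zeroing logits while preserving their expectation) resembles inverted dropout applied to the logit tensor. A minimal NumPy sketch follows; the function name `mask_logits` and the `keep_prob` hyperparameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mask_logits(logits, keep_prob=0.9, rng=None):
    """Randomly zero out predictive logits during training, rescaling the
    survivors by 1/keep_prob so the masked logits keep the same expectation
    (inverted-dropout style). `keep_prob` is an assumed hyperparameter."""
    rng = np.random.default_rng() if rng is None else rng
    # Bernoulli mask: each logit is kept independently with probability keep_prob.
    mask = rng.random(logits.shape) < keep_prob
    # Rescale so that E[masked_logits] == logits.
    return logits * mask / keep_prob
```

In training, such a function would be applied to the network's output logits just before the segmentation loss is computed; at inference time the logits would be left untouched.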