Boosting Out-of-Distribution Image Detection With Epistemic Uncertainty

Published: 01 Jan 2022, Last Modified: 12 May 2023, IEEE Access 2022
Abstract: Modern deep neural networks are known to produce over-confident class predictions even for unseen samples, yet safety-critical applications must recognize inputs that differ from the training distribution. For example, an autonomous vehicle must immediately refuse to act when it encounters an unexpected situation, and a voice assistant should ask the user to repeat a command it did not understand in order to prevent malfunction. In this paper, we propose an out-of-distribution sample detection algorithm based on the Uncertainty-based Additive Fast Gradient Sign Method (UA-FGSM), which applies Monte Carlo (MC) dropout during backpropagation. The proposed uncertainty-based method pushes the pre-trained model toward even more confident predictions on in-distribution samples and less confident predictions on out-of-distribution samples, widening the gap between the two. We further enlarge this gap by continuously accumulating uncertainty-based gradients. Because our method relies only on the inherent epistemic uncertainty of the pre-trained model, it requires no knowledge of the in-distribution dataset's domain and operates through simple pre-processing of the already trained model, without any re-training. We demonstrate its effectiveness with diverse network architectures on several popular image datasets and under noisy settings.
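The sketch below illustrates the general recipe the abstract describes: gradients of a confidence loss are taken with MC dropout left active, an FGSM-style perturbation is applied to the input, and the step is repeated so the uncertainty-based gradients accumulate before the final confidence score is read off. This is a minimal assumption-laden illustration, not the authors' released implementation; names such as ua_fgsm_score, epsilon, n_steps, and n_mc are hypothetical choices.

import torch
import torch.nn.functional as F

def enable_mc_dropout(model):
    # Keep dropout layers stochastic while the rest of the network stays in eval mode.
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def ua_fgsm_score(model, x, epsilon=0.002, n_steps=3, n_mc=8):
    # Returns one confidence score per sample; higher is assumed in-distribution.
    enable_mc_dropout(model)
    x_adv = x.clone()
    for _ in range(n_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Several MC-dropout forward passes give a stochastic (epistemic) estimate
        # of the predictive distribution.
        probs = torch.stack([F.softmax(model(x_adv), dim=1) for _ in range(n_mc)])
        mean_probs = probs.mean(dim=0)
        # Negative log-likelihood of the model's own most likely class.
        pseudo_labels = mean_probs.argmax(dim=1)
        loss = F.nll_loss(torch.log(mean_probs + 1e-12), pseudo_labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # FGSM-style step that pushes samples toward higher confidence; repeating the
        # step accumulates the uncertainty-guided perturbation, which is expected to
        # widen the confidence gap between in- and out-of-distribution inputs.
        x_adv = x_adv - epsilon * grad.sign()
    with torch.no_grad():
        score = F.softmax(model(x_adv), dim=1).max(dim=1).values
    return score

A threshold on the returned score (chosen on held-out in-distribution data, e.g. at 95% true-positive rate) would then separate in-distribution from out-of-distribution inputs, with no re-training of the model.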