On the performance of uncertainty estimation methods for deep-learning-based image classification models

Abstract: Previous works have shown that modern neural networks tend to be overconfident; thus, for deep-learning models to be trusted and adopted in critical applications, reliable uncertainty estimation (UE) is essential. However, many questions remain open regarding how to compare UE methods fairly. This work focuses on the task of selective classification and proposes a methodology in which the predictions of the underlying model are kept fixed and only the UE method is allowed to vary. Experiments are performed on convolutional neural networks using Deep Ensembles and Monte Carlo Dropout. Surprisingly, our results show that the conventional softmax response can outperform most other UE methods over a large part of the risk-coverage curve.
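To make the evaluation concrete, the following is a minimal sketch of how the softmax response is used for selective classification and how a risk-coverage curve can be computed from it. The function name, interface, and NumPy-based implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def risk_coverage_curve(probs, labels):
    """Risk-coverage curve using the softmax response (the maximum
    predicted probability) as the confidence score.

    probs:  (n, k) array of softmax outputs for n samples, k classes.
    labels: (n,) array of true class indices.
    Returns (coverage, risk) arrays with one entry per retained sample.
    """
    confidence = probs.max(axis=1)           # softmax response
    predictions = probs.argmax(axis=1)
    errors = (predictions != labels).astype(float)
    # Accept samples from most to least confident: coverage grows as
    # lower-confidence predictions are included, and risk is the
    # running error rate over the accepted samples.
    order = np.argsort(-confidence)
    cum_errors = np.cumsum(errors[order])
    n = len(labels)
    counts = np.arange(1, n + 1)
    coverage = counts / n
    risk = cum_errors / counts
    return coverage, risk
```

Under the paper's methodology, the predictions (`argmax`) would stay fixed while `confidence` is replaced by the score produced by each UE method under comparison (e.g. an ensemble- or dropout-based score), so that differences in the curve reflect only the UE method.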