Keywords: Monte Carlo dropout, probabilistic neural networks
TL;DR: Monte Carlo dropout on neural networks does not reproduce true uncertainties.
Abstract: Reliable uncertainty estimation is crucial for machine learning models, especially in safety-critical domains. While exact Bayesian inference offers a principled approach, it is often computationally infeasible for deep neural networks. Monte Carlo dropout (MCD) was proposed as an efficient approximation to Bayesian inference in deep learning: dropout is kept active at inference time, so each forward pass samples a sub-model, and the resulting distribution of predictions is used to estimate uncertainty. We investigate its ability to capture true uncertainty and compare it to Gaussian processes (GPs) and Bayesian neural networks (BNNs). We find that MCD struggles to accurately reflect the underlying true uncertainty, in particular failing to capture the increased uncertainty in extrapolation and interpolation regions observed in the Bayesian models. These findings suggest that uncertainty estimates from MCD, as implemented and evaluated in these experiments, may not be as reliable as those from traditional Bayesian approaches for capturing epistemic and aleatoric uncertainty.
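The MCD procedure described in the abstract can be sketched in plain NumPy; the tiny network, random weights, dropout rate, and number of passes below are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-hidden-layer regression network with fixed random weights.
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active (the core of MCD)."""
    h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # sample a dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=200):
    """T stochastic passes -> predictive mean and std per input."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5]])
mean, std = mc_dropout_predict(x)
```

Each pass samples a different sub-model, so the spread of the T predictions serves as the uncertainty estimate; the paper's point is that this spread need not track the true epistemic/aleatoric uncertainty the way a GP or BNN posterior does.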
Serve As Reviewer: ~Alexander_Johannes_Stasik1
Submission Number: 33