Uncertainty Quantification in Computer-Aided Diagnosis: Make Your Model Say "I Don't Know" for Ambiguous Cases
Keywords: bayesian approximation, variational inference, optical coherence tomography
TL;DR: In this work, we evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning.
Abstract: We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to obtain a posterior distribution over the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with higher uncertainty: the mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than that of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety.
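
The first method can be sketched as follows. This is a minimal PyTorch illustration of Monte Carlo dropout as the abstract describes it, not the authors' released code: the function name `mc_dropout_predict`, the choice of `n_samples=50`, and the use of the predicted class's variance as the uncertainty score are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout active at test time and sample
    a posterior distribution over the class labels (Bayesian ResNet)."""
    model.eval()
    # Re-enable only the dropout layers; batch norm etc. stay in eval mode.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # predictive class probabilities
    var = probs.var(dim=0)    # per-class variance of the posterior samples
    # Uncertainty metric: variance of the predicted (argmax) class.
    uncertainty = var.gather(-1, mean.argmax(-1, keepdim=True)).squeeze(-1)
    return mean, uncertainty
```

The second method replaces the deterministic output layer with a head that predicts the parameters of the posterior and samples the final result from it. Again a hedged sketch: the Gaussian-over-logits parameterization and the reparameterized sampling below are one plausible reading of "Variational ResNet", not a confirmed detail of the paper.

```python
import torch
import torch.nn as nn

class VariationalHead(nn.Module):
    """Hypothetical final layer: predicts the mean and log-variance of a
    Gaussian over the logits, then samples class probabilities from it."""
    def __init__(self, in_features, n_classes):
        super().__init__()
        self.mu = nn.Linear(in_features, n_classes)
        self.log_var = nn.Linear(in_features, n_classes)

    def forward(self, features, n_samples=50):
        mu, log_var = self.mu(features), self.log_var(features)
        std = torch.exp(0.5 * log_var)
        # Reparameterization trick: sample logits, average the softmax outputs.
        eps = torch.randn(n_samples, *mu.shape, device=mu.device)
        probs = torch.softmax(mu + eps * std, dim=-1)
        return probs.mean(dim=0), probs.var(dim=0)  # prediction, uncertainty
```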
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/uncertainty-quantification-in-computer-aided/code)