Uncertainty-Aware Classification: A Human-Guided Bayesian Deep Learning Framework

17 Sept 2025 (modified: 17 Sept 2025) · MICCAI 2025 Workshop UNSURE Submission · CC BY 4.0
Keywords: Uncertainty · Model calibration · Trustworthy AI
Abstract: While neural networks achieve strong performance in medical image analysis, effectively combining their predictions with human expertise remains a critical challenge for clinical deployment. We examine how different choices of the stochastic parameter subset used in approximate Bayesian inference affect the posterior predictive distribution and, consequently, the performance of a combined human-AI decision model. Using two medical classification tasks, we analyze the relationship between model and human uncertainty and demonstrate that the model's uncertainty estimates correlate with human uncertainty to different degrees depending on the chosen stochastic subset. Building on these findings, we propose a framework that optimizes the choice of stochastic subset to improve a final decision model that accounts for human uncertainty, enabling more reliable and interpretable integration of human and AI predictions in clinical settings. Our implementation is publicly available at https://github.com/mkreimann/uncertainty-guided-classification
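The abstract's central mechanism, restricting stochasticity in approximate Bayesian inference to a chosen parameter subset and comparing the resulting predictive uncertainty, can be illustrated with a minimal sketch. The code below is not the authors' implementation (see the linked repository for that); it assumes PyTorch, uses Monte Carlo dropout on selected layers as the approximate inference method, and the names SubsetStochasticNet, predictive_distribution, and predictive_entropy are hypothetical.

```python
# Minimal sketch (assumption: MC dropout as the approximate Bayesian method;
# the stochastic parameter subset is chosen by where dropout is placed).
import torch
import torch.nn as nn

class SubsetStochasticNet(nn.Module):
    """Small classifier whose stochastic component (dropout) can be
    restricted to a chosen subset of layers."""
    def __init__(self, in_dim=32, n_classes=2, stochastic_layers=("head",), p=0.2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Dropout(p) if "backbone" in stochastic_layers else nn.Identity(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Dropout(p) if "head" in stochastic_layers else nn.Identity(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

@torch.no_grad()
def predictive_distribution(model, x, n_samples=50):
    """Monte Carlo estimate of the posterior predictive: average the
    softmax outputs over stochastic forward passes."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0)

def predictive_entropy(probs, eps=1e-12):
    """Total predictive uncertainty of the averaged distribution."""
    return -(probs * (probs + eps).log()).sum(dim=-1)

if __name__ == "__main__":
    x = torch.randn(4, 32)
    # Different stochastic subsets yield different predictive uncertainty,
    # which is the quantity compared against human uncertainty in the paper.
    for subset in [("head",), ("backbone", "head")]:
        model = SubsetStochasticNet(stochastic_layers=subset)
        p = predictive_distribution(model, x)
        print(subset, predictive_entropy(p))
```

A combined human-AI decision rule could then, for instance, defer to the human expert whenever the model's predictive entropy exceeds a threshold calibrated against recorded human uncertainty; per the abstract, the proposed framework chooses the stochastic subset that most improves such a combined decision model.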
Submission Number: 19