From Quantifying to Reducing Uncertainty: Diffusion Hypernetworks for Robust Medical Image Reconstruction
Keywords: uncertainty, hypernetworks, medical imaging, machine learning
TL;DR: We propose and implement methods that reduce aleatoric and epistemic uncertainty in AI-based reconstruction models for medical images.
Abstract: Accelerated medical imaging is widely used to reduce scan time and radiation exposure, improving the patient experience. However, sparse-view CT and accelerated MRI produce reconstructions that suffer from both aleatoric variability (acquisition noise, undersampling, patient motion) and epistemic variability (model uncertainty). Prior work has focused on quantifying uncertainty, but reporting it alone does not improve the robustness of reconstructed images. We introduce a diffusion-based reconstruction framework with a Bayesian hypernetwork that explicitly reduces uncertainty rather than merely estimating it. Two complementary learning objectives target the distinct sources: noise-consistency reduces aleatoric uncertainty and weight-consistency reduces epistemic uncertainty. Trained in separate phases to avoid interference, these objectives produce reconstructions that are both high-quality and reliable. Experiments on sparse-view CT (LUNA16) and accelerated MRI (fastMRI Knee and Brain) show substantial reductions in both uncertainty components without degrading image quality, and consistent gains in downstream lung nodule segmentation and pathology classification. By shifting uncertainty from a diagnostic overlay to an optimization target, our method produces reconstructions that are anatomically accurate and clinically useful, advancing uncertainty-aware generative modeling for medical imaging.
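To make the two-objective, two-phase idea concrete, below is a minimal PyTorch sketch of how a Bayesian hypernetwork and the two consistency losses could be wired together. All module names, shapes, and loss forms are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the paper's code): a hypernetwork samples
# denoiser weights; a noise-consistency loss penalizes disagreement across
# re-noisings of one input (aleatoric), and a weight-consistency loss penalizes
# disagreement across weight samples for one input (epistemic).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianHypernet(nn.Module):
    """Maps latent samples z to the weights of a linear denoising head,
    so repeated draws of z form an implicit ensemble over denoiser weights."""

    def __init__(self, z_dim=16, in_features=256, out_features=256):
        super().__init__()
        self.z_dim = z_dim
        self.in_features, self.out_features = in_features, out_features
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.SiLU(),
            nn.Linear(128, in_features * out_features),
        )

    def sample_weights(self, n_samples=1):
        z = torch.randn(n_samples, self.z_dim)
        w = self.net(z)  # (n_samples, in_features * out_features)
        return w.view(n_samples, self.out_features, self.in_features)


def denoise(x, weight):
    """Apply one hypernetwork-generated linear denoising head."""
    return F.linear(x, weight)


def noise_consistency_loss(hyper, x, sigma, n_renoise=2):
    """Aleatoric objective (assumed form): for a fixed weight sample, outputs
    should agree across independent re-noisings of the same input."""
    w = hyper.sample_weights(1)[0]
    preds = torch.stack(
        [denoise(x + sigma * torch.randn_like(x), w) for _ in range(n_renoise)]
    )
    return preds.var(dim=0).mean()


def weight_consistency_loss(hyper, x, n_weights=4):
    """Epistemic objective (assumed form): for a fixed input, predictions
    should agree across weight samples drawn from the hypernetwork."""
    ws = hyper.sample_weights(n_weights)
    preds = torch.stack([denoise(x, w) for w in ws])
    return preds.var(dim=0).mean()


# Separate phases, mirroring the abstract: first aleatoric, then epistemic.
hyper = BayesianHypernet()
opt = torch.optim.Adam(hyper.parameters(), lr=1e-4)
x = torch.randn(8, 256)  # stand-in for noisy measurement features
for _ in range(10):      # phase 1: noise consistency
    loss = noise_consistency_loss(hyper, x, sigma=0.1)
    opt.zero_grad(); loss.backward(); opt.step()
for _ in range(10):      # phase 2: weight consistency
    loss = weight_consistency_loss(hyper, x)
    opt.zero_grad(); loss.backward(); opt.step()
```

In a full pipeline these consistency terms would presumably be combined with the standard diffusion denoising objective and applied to a convolutional score network rather than a single linear head; the sketch only isolates how the two uncertainty-reduction losses differ.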
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 22223