Abstract: In medical image segmentation, some foreground regions are more ambiguous than others because of their confusing appearance. It is therefore important to measure the ambiguity of each pixel and use it for robust model training. To this end, we design a Bayesian uncertainty estimate layer and propose uncertainty-guided training for standard convolutional segmentation models. The proposed layer independently estimates the confidence of each pixel's prediction, which is combined with prediction correctness to compute per-pixel rescaling weights for the training loss. Through this mechanism, regions with different degrees of ambiguity receive different learning importance. We validate our proposal by comparing it against other loss-rescaling approaches on medical image datasets. The results consistently show that uncertainty-guided training significantly improves lesion segmentation accuracy.
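The abstract's core mechanism, per-pixel loss rescaling driven by predictive uncertainty and correctness, can be sketched as follows. This is a minimal illustration, not the paper's Bayesian layer: uncertainty is approximated here by the binary predictive entropy, and the particular weighting rule (down-weight confident-but-ambiguous correct pixels, keep full weight on errors) is an assumed example of such a scheme.

```python
import numpy as np


def pixel_entropy(prob, eps=1e-7):
    """Binary predictive entropy as a simple per-pixel uncertainty proxy."""
    p = np.clip(prob, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))


def rescaled_bce(prob, label, eps=1e-7):
    """Per-pixel binary cross-entropy rescaled by uncertainty and correctness.

    Illustrative weighting (not the paper's exact formulation): correctly
    predicted pixels are down-weighted in proportion to their uncertainty,
    so ambiguous regions contribute less to the gradient, while wrongly
    predicted pixels keep full weight.
    """
    p = np.clip(prob, eps, 1.0 - eps)
    bce = -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))
    u = pixel_entropy(prob) / np.log(2.0)  # normalise entropy to [0, 1]
    correct = ((prob > 0.5) == (label > 0.5)).astype(float)
    # Weight in [0, 1]: 1 for errors, lower for uncertain correct pixels.
    w = 1.0 - u * correct
    return (w * bce).mean()
```

Since every weight is at most 1, the rescaled loss is bounded above by the plain mean BCE; the effect is to redirect training effort toward pixels that are both uncertain and incorrectly predicted.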