Abstract: Semi-supervised learning is a viable strategy for addressing the limited availability of annotated data in medical image segmentation. However, current semi-supervised methods for medical image segmentation concentrate mainly on single-modality data. Effectively leveraging multi-modality data, which provides complementary information across modalities, is therefore a valuable research direction for enhancing semi-supervised segmentation. Nevertheless, most existing multi-modality semi-supervised segmentation methods rely on tightly coupled fusion networks that require multi-modality data during both training and inference, which constrains their deployment in clinical practice. To tackle this difficulty, we present UGCM-Semi, a novel multi-modality semi-supervised segmentation framework with an uncertainty-guided distribution calibration (UGDC) student network and a cross-model feature alignment loss. Specifically, the UGDC student network uses uncertainty modeling to quantify and reduce the distribution shift between labeled and unlabeled data of different modalities. Meanwhile, the cross-model feature alignment loss mitigates the incorrect supervision passed from the teacher network to the student network as a result of this distribution shift. Extensive experiments on two public multi-modality MRI datasets demonstrate that UGCM-Semi outperforms state-of-the-art methods.
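As an illustration of the second mechanism, a cross-model feature alignment loss could take a form like the following sketch. This is a hypothetical implementation for intuition only (the abstract does not specify the exact formulation): it penalizes disagreement between L2-normalized student and teacher feature vectors.

```python
import numpy as np

def feature_alignment_loss(student_feats, teacher_feats, eps=1e-8):
    """Hypothetical cross-model feature alignment loss: mean squared error
    between L2-normalized student and teacher features. The actual loss in
    UGCM-Semi may differ; this only illustrates the general idea of aligning
    the two models' feature distributions."""
    # Normalize each feature vector so the loss measures directional
    # agreement rather than raw magnitude.
    s = student_feats / (np.linalg.norm(student_feats, axis=-1, keepdims=True) + eps)
    t = teacher_feats / (np.linalg.norm(teacher_feats, axis=-1, keepdims=True) + eps)
    return float(np.mean((s - t) ** 2))
```

When the student's features drift away from the teacher's (e.g., due to the modality-induced distribution shift described above), this loss grows, pushing the student back toward the teacher's feature distribution.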