Abstract: Although semi-supervised medical image segmentation can achieve strong results from a small amount of labeled data and a large amount of unlabeled data, it still faces challenges. Despite the progress current methods have made in exploiting unlabeled data, they fail to realize the full potential of labeled data for improving model performance. In this paper, we propose DistillMatch, a semi-supervised segmentation method that combines knowledge distillation with feature perturbation to transfer knowledge efficiently between labeled and unlabeled data, thereby making full use of the information in labeled data to improve segmentation results. DistillMatch consists of several key components: the Self-Training process based on knowledge distillation and feature perturbation, the Deterministic Knowledge Transfer (DKT) strategy, and the introduction of a Teacher Assistant (TA), which together optimize model performance. Extensive experiments on two benchmark datasets demonstrate that our method outperforms current state-of-the-art (SOTA) approaches, particularly in edge accuracy and model generalization. We also show that this improvement is achieved without sacrificing computational efficiency, through an effective multi-decoder implementation strategy. These results demonstrate both the effectiveness of our approach and its practical value in medical image segmentation tasks. Code is available at https://github.com/AiEson/DistillMatch.
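The two core ingredients the abstract names, knowledge distillation and feature perturbation, can be illustrated with a minimal numpy sketch. This is not the paper's implementation (the abstract gives no formulas); the function names, the temperature-scaled KL distillation loss, and the additive-Gaussian-noise perturbation are all standard-practice assumptions used here purely for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """A generic distillation loss: KL(teacher || student) on softened
    class probabilities. Stands in for whatever loss DistillMatch uses."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8)), axis=-1)
    return float(np.mean(kl))

def perturb_features(features, noise_std=0.1, seed=None):
    """A simple feature perturbation: additive Gaussian noise on
    intermediate features, one common instantiation of the idea."""
    rng = np.random.default_rng(seed)
    return features + rng.normal(0.0, noise_std, size=features.shape)
```

In a teacher-assistant setup of the kind the abstract mentions, such a loss would typically be applied twice, teacher-to-TA and TA-to-student, with perturbed features feeding the student branch.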