Abstract: Notable progress has been made in medical image segmentation models due to the availability of massive training data. Nevertheless, a majority of open-source datasets are only partially labeled: not all expected organs or tumors are annotated in the images. While previous attempts have been made to learn segmentation only from labeled regions of interest (ROIs), they do not consider latent classes, i.e., ROIs that exist in the images but are unlabeled, during training. Moreover, since these methods rely exclusively on labeled ROIs and treat unlabeled regions as background, they need large-scale and diverse datasets to achieve a variety of ROI segmentation. In this paper, we propose a framework that utilizes latent classes for segmentation from partially labeled datasets, aiming to improve segmentation performance, especially for ROIs with only a small number of annotations. Specifically, we first introduce an ROI-aware network that detects the presence of unlabeled ROIs in images and forms the latent classes, which are then used to guide segmentation learning. Additionally, ROIs whose existence is ambiguous are constrained by a consistency loss between the predictions of the student and teacher networks. By regularizing ROIs with different certainty levels under different scenarios, our method significantly improves the robustness and reliability of segmentation on large-scale datasets. Experimental results on a public benchmark for partially labeled segmentation demonstrate that our proposed method surpasses previous attempts and has great potential to serve as a large-scale foundation segmentation model.
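The loss design described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation: the function names, the class-level labeled mask, and the MSE form of the teacher-student consistency term are all hypothetical simplifications of the idea (supervised cross-entropy only on annotated ROI classes, consistency regularization on latent classes).

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def partial_label_loss(student_logits, teacher_logits, labels,
                       labeled_mask, cons_weight=0.1):
    """Sketch of a partial-label segmentation loss.

    student_logits, teacher_logits: (C, N) class logits for N pixels.
    labels: (N,) integer ground-truth class per pixel (only meaningful
        where the class is actually annotated in this dataset).
    labeled_mask: (C,) boolean, True for classes annotated in the dataset;
        the remaining classes are treated as latent.
    """
    p_s = softmax(student_logits, axis=0)
    p_t = softmax(teacher_logits, axis=0)

    # Supervised cross-entropy, restricted to pixels whose ground-truth
    # class belongs to the labeled (annotated) set.
    sup_pixels = labeled_mask[labels]
    if sup_pixels.any():
        idx = np.nonzero(sup_pixels)[0]
        ce = -np.log(p_s[labels[idx], idx] + 1e-8).mean()
    else:
        ce = 0.0

    # Consistency term on latent (existing but unlabeled) classes: the
    # student is pulled toward the teacher's predictions instead of being
    # forced to call these regions background.
    latent = ~labeled_mask
    cons = ((p_s[latent] - p_t[latent]) ** 2).mean() if latent.any() else 0.0

    return ce + cons_weight * cons
```

In a full training loop the teacher would typically be an exponential moving average of the student, and the latent-class set would come from the ROI-aware presence detector rather than a fixed mask; both are elided here for brevity.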