Semi-Supervised Abdomen Extraction and Organ Segmentation in CT Images

04 Aug 2022 (modified: 05 May 2023) · MICCAI 2022 Challenge FLARE · Withdrawn Submission · Readers: Everyone
Keywords: Semi-Supervised, 3D U-Net, Abdomen Region Segmentor, Organ Segmentation, Easy Implementation
TL;DR: We propose a two-stage approach that extracts the abdominal region and segments the abdominal organs. To leverage unlabeled data, we use FixMatchSeg, which adapts the semi-supervised classification method, FixMatch, to segmentation tasks.
Abstract: State-of-the-art supervised deep learning models segment abdominal organs well in CT scans when the test data follow a distribution similar to the training data. Because these models generalize poorly, new data from a different environment must still be labeled and the models retrained. However, expert annotation of multiple organs in volumetric scans is time-consuming and often prohibitively expensive. Semi-supervised methods that leverage unlabeled data together with a few labeled examples could be an attractive solution, but existing semi-supervised multi-organ semantic segmentation methods do not perform well. Moreover, clinical CT scans containing abdominal organs vary greatly, with diverse resolutions and fields of view ranging from the abdominal region alone to the whole body. In this paper, we propose a two-stage approach in which an abdomen region segmentation network extracts the abdominal region, which is then fed as input to an abdominal organ segmentation network; both stages use a U-Net-based architecture. To leverage unlabeled data, we use FixMatchSeg, which adapts the semi-supervised classification method FixMatch to segmentation tasks. FixMatchSeg combines the standard supervised loss on labeled examples with an unsupervised loss based on pseudo labeling and consistency regularization over unlabeled samples. Our model improved the mean Dice score from 31.54% to 52.1% on the validation set when utilizing 2,000 unlabeled training images in addition to the 50 labeled images.
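The FixMatchSeg unsupervised loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `weak_probs` and `strong_probs` stand for per-pixel class probabilities predicted from weakly and strongly augmented views of the same unlabeled image, and the confidence threshold value is a hypothetical choice.

```python
import numpy as np

def fixmatchseg_unsup_loss(weak_probs, strong_probs, threshold=0.95):
    """Sketch of a FixMatch-style unsupervised loss for segmentation.

    weak_probs, strong_probs: arrays of shape (C, H, W) holding per-pixel
    class probabilities from weakly- and strongly-augmented views of the
    same unlabeled image. Pixels whose weak-view confidence exceeds
    `threshold` receive a pseudo label; the loss is the cross-entropy of
    the strong-view prediction against those pseudo labels, averaged over
    confident pixels only (consistency regularization).
    """
    conf = weak_probs.max(axis=0)       # (H, W) max class probability
    pseudo = weak_probs.argmax(axis=0)  # (H, W) hard pseudo labels
    mask = conf >= threshold            # keep only confident pixels
    if not mask.any():
        return 0.0                      # no confident pixels: no signal
    h, w = np.nonzero(mask)
    # cross-entropy of strong prediction vs. pseudo label, masked pixels only
    ce = -np.log(strong_probs[pseudo[h, w], h, w] + 1e-8)
    return float(ce.mean())
```

In the full semi-supervised objective, this term would be added to the standard supervised loss computed on the labeled examples.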
Supplementary Material: zip