How to select slices for annotation to train best-performing deep learning segmentation models for cross-sectional medical images?
Keywords: Annotation, Semantic segmentation, Cross-sectional Medical Image Analysis
TL;DR: An experimental study on aspects affecting the efficacy of sparse annotations in training deep segmentation models for cross-sectional medical images
Abstract: Automated segmentation of medical images heavily relies on the availability of precise manual annotations. However, generating these annotations is often time-consuming, expensive, and sometimes requires specialized expertise (especially for cross-sectional medical images). Therefore, it is essential to optimize the use of annotation resources to ensure efficiency and effectiveness. In this paper, we systematically address the question: "in a non-interactive annotation pipeline, how should slices from cross-sectional medical images be selected for annotation to maximize the performance of the resulting deep learning segmentation models?"
We conducted experiments on 4 medical imaging segmentation tasks with varying annotation budgets, numbers of annotated cases, numbers of annotated slices per volume, slice selection techniques, and mask interpolations.
We found that:
1) Given a fixed annotation budget, it is almost always preferable to annotate fewer slices per volume across more volumes.
2) Selecting slices for annotation by unsupervised active learning (UAL) is not superior to selecting slices randomly or at fixed intervals, provided that each volume is allocated the same number of annotated slices.
3) Interpolating masks between annotated slices rarely enhances model performance, with the exception of some specific configurations for 3D models.
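The slice selection and mask interpolation strategies compared above can be sketched as follows. This is a minimal illustration under assumed conventions (a volume indexed along its first axis, binary masks, linear interpolation with a 0.5 threshold), not the paper's actual implementation; the function names are hypothetical.

```python
import numpy as np

def select_slices_fixed_interval(num_slices, budget):
    """Pick `budget` slice indices evenly spaced through the volume."""
    return np.linspace(0, num_slices - 1, budget).round().astype(int)

def select_slices_random(num_slices, budget, seed=0):
    """Pick `budget` distinct slice indices uniformly at random."""
    rng = np.random.default_rng(seed)
    return np.sort(rng.choice(num_slices, size=budget, replace=False))

def interpolate_masks(masks, annotated):
    """Linearly interpolate binary masks between consecutive annotated
    slices along the first (slice) axis, then threshold at 0.5."""
    annotated = np.sort(np.asarray(annotated))
    out = np.zeros(masks.shape, dtype=float)
    for lo, hi in zip(annotated[:-1], annotated[1:]):
        for z in range(lo, hi + 1):
            t = (z - lo) / max(hi - lo, 1)  # 0 at lo, 1 at hi
            out[z] = (1 - t) * masks[lo] + t * masks[hi]
    return out >= 0.5
```

Under a per-volume budget of annotated slices, either selector yields the indices to send to annotators; the interpolated masks can then optionally serve as dense pseudo-labels for the unannotated slices in between.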
Primary Subject Area: Segmentation
Secondary Subject Area: Validation Study
Paper Type: Validation or Application
Registration Requirement: Yes
Latex Code: zip
Copyright Form: pdf
Submission Number: 146