Anytime, Anywhere, Anyone: Investigating the Feasibility of Segment Anything Model for Crowd-Sourcing Medical Image Annotations

Published: 27 Apr 2024 · Last Modified: 14 May 2024 · MIDL 2024 Short Papers · CC BY 4.0
Keywords: Data Annotation, Segment Anything Model, Foundation Model
Abstract: Curating annotations for medical image segmentation is a labor-intensive task that requires domain expertise, resulting in "narrowly" focused deep learning (DL) models with limited translational utility. We explore the potential of the Segment Anything Model (SAM) for crowd-sourcing "sparse" annotations from non-experts to generate "dense" segmentation masks for training 3D nnU-Net models. Our results indicate that while SAM-generated annotations exhibit high mean Dice scores compared to ground-truth annotations, SAM nnU-Net models perform significantly worse than ground-truth nnU-Net models.
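The abstract's comparison hinges on the Dice similarity coefficient between SAM-generated and ground-truth masks. As a minimal sketch (the function name and toy masks are illustrative, not from the paper), the metric can be computed for binary masks as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are considered a perfect match.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy example: two 4x4 masks of 8 pixels each, overlapping in 4 pixels.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True
print(dice_score(a, b))  # 2*4 / (8+8) = 0.5
```

In the paper's pipeline this score would be computed per structure between a SAM output mask and the expert annotation, then averaged to give the reported mean Dice.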
Submission Number: 18