OneSAM: A Modality-Agnostic Segment Anything Model for Medical Images

24 May 2024 (modified: 11 Oct 2024) · Submitted to CVPR24 MedSAMonLaptop · CC BY-SA 4.0
Keywords: Segment Anything Model, Adapter, High-quality masks, Efficient segmentation learning, Cancer
TL;DR: Efficient SAM for medical images on laptop
Abstract: In clinical practice, medical image segmentation is crucial for the precise quantification of anatomical structures and pathological regions. The field is undergoing a significant transformation, shifting from specialized models designed for specific tasks to versatile foundation models that handle diverse segmentation scenarios. Despite this progress, most current segmentation foundation models are optimized for natural images and typically require extensive computational resources at inference time, which hinders their broad adoption in clinical environments. This challenge aims to close that gap by developing universal, promptable medical image segmentation models that can be deployed on standard laptops or edge devices without GPU support. Participants build a lightweight model driven by bounding-box prompts, supported by an extensive training dataset of over 1 million image-mask pairs spanning 10 medical imaging modalities and more than 20 cancer types.
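For readers unfamiliar with box-prompted segmentation, the minimal Python sketch below illustrates the inference flow the abstract describes: a 2D image plus an (x1, y1, x2, y2) bounding-box prompt, producing a binary mask on CPU only. The abstract does not specify the model architecture, so everything here is hypothetical: `normalize_box` and `dummy_box_prompted_segmenter` are illustrative helpers (the dummy model simply fills the prompt box so the pipeline runs end to end), and the 256-pixel input size is only the common convention of lightweight SAM-style models, not a detail taken from the submission.

```python
import numpy as np

def normalize_box(box_xyxy, image_hw, target_size=256):
    """Rescale an (x1, y1, x2, y2) box from original image coordinates
    to a square model input resolution (long side = target_size), the
    convention typically used by SAM-style prompt encoders."""
    h, w = image_hw
    scale = target_size / max(h, w)
    return np.asarray(box_xyxy, dtype=np.float32) * scale

def dummy_box_prompted_segmenter(image, box_xyxy):
    """Stand-in for a lightweight promptable model: it just returns a
    mask filled inside the prompt box so the CPU-only flow is runnable.
    A real model would predict mask logits from image + box prompt."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    x1, y1, x2, y2 = np.round(box_xyxy).astype(int)
    mask[y1:y2, x1:x2] = 1
    return mask

if __name__ == "__main__":
    # One grayscale slice (e.g., a CT image) and one bounding-box prompt.
    image = np.random.rand(512, 512).astype(np.float32)
    box = (120, 140, 260, 300)  # (x1, y1, x2, y2) in pixel coordinates
    box_256 = normalize_box(box, image.shape[:2])  # model-space prompt
    mask = dummy_box_prompted_segmenter(image, box)
    print("prompt in model space:", box_256)
    print("foreground pixels:", int(mask.sum()))
```

The design point this sketch captures is that the only user interaction is the box prompt; the segmenter itself must be cheap enough to run without a GPU, which is the core constraint of the challenge.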
Submission Number: 1