LiteMedSAM with Low-Rank Adaptation and Multi-Box Efficient Inference for Medical Image Segmentation

31 May 2024 (modified: 11 Oct 2024) · Submitted to CVPR24 MedSAM on Laptop · CC BY-SA 4.0
Keywords: Medical image segmentation; Segment Anything Model; Low-Rank Adaptation
Abstract: Medical image segmentation is essential in clinical practice for accurately quantifying anatomical structures and pathological regions. Although the field is shifting toward foundation models that handle diverse segmentation tasks, current models are typically optimized for natural images and demand substantial computational resources, which limits their widespread clinical use. In this paper, we analyze the distribution of imaging modalities in the challenge training dataset and adjust the probability of sampling each modality during data loading, alleviating the severe modality imbalance and improving segmentation performance on modalities with limited data. We fine-tune LiteMedSAM by applying low-rank adaptation (LoRA) to the multi-head attention and multilayer perceptron layers of its TinyViT image encoder. To improve inference speed, we process multiple box prompts concurrently in a single forward pass and merge their outputs with an argmax operation, which further enhances segmentation accuracy.
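Two of the steps described in the abstract lend themselves to a short illustration: re-weighting the sampling probability of each modality during data loading, and merging the predictions from multiple box prompts with an argmax. The snippet below is a minimal PyTorch sketch of both ideas; the function names, the inverse-frequency weighting, and the background threshold are illustrative assumptions, not the authors' released code.

```python
# Sketch of (1) modality-balanced sampling and (2) argmax fusion of
# multi-box outputs, as described in the abstract. Helper names and the
# exact weighting scheme are assumptions for illustration.
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler


def modality_balanced_sampler(modalities: list[str]) -> WeightedRandomSampler:
    """Weight each sample inversely to the size of its modality so that
    under-represented modalities are drawn more often than under
    uniform sampling."""
    counts = Counter(modalities)
    weights = torch.tensor([1.0 / counts[m] for m in modalities], dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(modalities), replacement=True)


def merge_box_outputs(mask_logits: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """Combine per-box mask logits of shape (num_boxes, H, W) into one
    label map: each pixel is assigned to the box prompt with the highest
    logit, or to background (label 0) if no logit exceeds `threshold`."""
    background = torch.full_like(mask_logits[:1], threshold)   # (1, H, W)
    stacked = torch.cat([background, mask_logits], dim=0)      # (1 + num_boxes, H, W)
    return torch.argmax(stacked, dim=0)                        # labels in [0, num_boxes]
```

A DataLoader built with sampler=modality_balanced_sampler(modalities) then draws roughly balanced batches across modalities, and merge_box_outputs collapses the per-box logits from one batched forward pass into a single label map.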
Submission Number: 19