Reproducibility Analysis: Reproducing the Top-Ranked Team's Results

31 May 2024 (modified: 11 Oct 2024) · Submitted to CVPR24 MedSAMonLaptop · CC BY-SA 4.0
Keywords: Medical image segmentation; Edge AI
TL;DR: We successfully reproduced the top-ranked team's solution and even exceeded the original results on some modalities of the validation set.
Abstract: Many excellent solutions emerged in the competition. We chose to reproduce the rank-1 solution, MedficientSAM, which replaces the heavy image encoder in SAM with the EfficientViT model and then distills knowledge from the MedSAM model on the challenge training set. The test results show that we successfully reproduced the top-ranked team's solution. Due to hardware limitations, we trained with a batch size one quarter of that used in the original solution. In addition, extra X-Ray validation images were added in the post-challenge phase, which led to a decline in model performance. Our reproduced scheme achieves average DSC and NSD scores of 0.8516 and 0.8668 on the public validation set, slightly lower than the original scheme's 0.8642 and 0.8795, but still well above the baseline's average DSC of 0.8323 and NSD of 0.8271. This further demonstrates the reproducibility of the top-ranked team's solution. Our detailed experiment logs, trained weights, and Docker image are publicly available at: https://github.com/RicoLeehdu/medficientsam-reproduce.
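
To make the distillation step concrete, below is a minimal PyTorch sketch of the encoder-distillation idea described in the abstract: a lightweight student encoder is trained to match the frozen MedSAM image encoder's embeddings. The class name `DistillationTrainer` and the use of an MSE embedding loss are illustrative assumptions, not the actual MedficientSAM implementation.

```python
import torch
import torch.nn as nn

class DistillationTrainer:
    """Sketch: train a light student encoder (e.g. EfficientViT) to mimic
    a frozen teacher encoder (e.g. MedSAM's ViT image encoder)."""

    def __init__(self, student: nn.Module, teacher: nn.Module, lr: float = 1e-4):
        self.student = student
        self.teacher = teacher.eval()            # teacher is frozen
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.criterion = nn.MSELoss()            # match image embeddings
        self.optimizer = torch.optim.AdamW(self.student.parameters(), lr=lr)

    def step(self, images: torch.Tensor) -> float:
        with torch.no_grad():
            target = self.teacher(images)        # teacher embeddings
        pred = self.student(images)              # student embeddings
        loss = self.criterion(pred, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()
```

Once distilled, the student encoder can be dropped into the SAM pipeline in place of the original image encoder, keeping the prompt encoder and mask decoder unchanged.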
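For context on the reported scores, here is a minimal sketch of how DSC and NSD are commonly computed for binary masks, assuming the DeepMind `surface-distance` package; the 2 mm tolerance is an assumption for illustration, not taken from the challenge rules.

```python
import numpy as np
from surface_distance import (
    compute_surface_distances,
    compute_surface_dice_at_tolerance,
)

def dsc(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks."""
    inter = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def nsd(gt: np.ndarray, pred: np.ndarray, spacing_mm, tolerance_mm: float = 2.0) -> float:
    """Normalized surface Dice at a given tolerance (surface-distance library)."""
    sd = compute_surface_distances(gt.astype(bool), pred.astype(bool), spacing_mm)
    return compute_surface_dice_at_tolerance(sd, tolerance_mm)
```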
Submission Number: 10