Abstract: Precise segmentation of lesions in contrast-enhanced ultrasound (CEUS) videos, especially during the peak enhancement phase, is crucial for early breast cancer diagnosis. However, the dynamic contrast patterns and subtle differences in CEUS images challenge traditional segmentation methods. To overcome this, we propose CEUS-SAM, a deep learning framework that leverages the Segment Anything Model (SAM) for enhanced lesion segmentation. Our approach first trains on conventional ultrasound (US) data and uses the resulting segmentation masks to generate prompts for the corresponding CEUS images. A key innovation, the Image Fusion Module (IFM), integrates cross-modal and multi-scale features from US and CEUS, improving tissue differentiation and lesion detection. By requiring only single-point prompts, CEUS-SAM substantially reduces manual annotation effort and minimizes inter-observer variability. On a breast CEUS dataset of 135 video sequences, our method achieves a Dice score of 78.6% and an IoU of 66.6%. The code and dataset are available at https://github.com/2284650586/CEUS-SAM.
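The abstract describes generating single-point prompts from US segmentation masks. As a minimal sketch of that step, the snippet below derives one point prompt from a predicted binary US mask; the centroid-based derivation and the helper name `point_prompt_from_mask` are assumptions for illustration, since the abstract does not specify how the point is chosen.

```python
import numpy as np

def point_prompt_from_mask(mask: np.ndarray) -> tuple[int, int]:
    """Derive a single (x, y) point prompt from a binary lesion mask.

    Hypothetical helper: CEUS-SAM uses single-point prompts obtained from
    US segmentation; here we assume the prompt is the centroid of the
    predicted lesion region (an illustrative choice, not the paper's
    documented method).
    """
    ys, xs = np.nonzero(mask)          # coordinates of lesion pixels
    if xs.size == 0:
        raise ValueError("empty mask: no lesion pixels to prompt from")
    # SAM-style models take point prompts as (x, y) pixel coordinates.
    return int(round(xs.mean())), int(round(ys.mean()))

# Example: a 3x3 lesion inside an 8x8 US mask
mask = np.zeros((8, 8), dtype=np.uint8)
mask[3:6, 4:7] = 1
print(point_prompt_from_mask(mask))    # centroid of the lesion block
```

In a SAM-style pipeline, this (x, y) point (with a positive label) would be passed as the prompt when segmenting the paired CEUS frame.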