DuEU-Net: Dual Encoder UNet with Modality-Agnostic Training for PET-CT Multi-modal Organ and Lesion Segmentation
Abstract: Multimodal PET-CT segmentation plays a crucial role in medical image analysis, offering vital localization and quantification of tumors and organs. However, automatic segmentation of multimodal medical images remains a significant challenge. In this study, we developed a deep learning-based segmentation model for PET-CT that can simultaneously segment organs and tumors. We design separate encoders for PET and CT to comprehensively capture the features of both modalities, and the multimodal features are then fed into a shared decoder. Additionally, to address the challenge of limited PET-CT data, we developed a model capable of generating PET images from CT scans. This approach allows us to include CT-only datasets in the training process, thereby enhancing the model's generalization and performance. Experimental evaluations on publicly available datasets demonstrate the superiority of our method over benchmark approaches. We also test the generalization ability of our model on an internal breast cancer dataset. Our code is available at https://github.com/MD7sjh/DuEU-Net.
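The abstract describes the dual-encoder design only at a high level; the sketch below illustrates one way a PET/CT dual-encoder U-Net with a shared decoder could be organized in PyTorch. It is a minimal 2D sketch under stated assumptions: the channel widths, concatenation-based fusion of PET and CT features at each scale, and all module names are illustrative and not taken from the paper (the authors' implementation is in the linked repository).

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU (standard U-Net stage).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """Single-modality encoder: returns feature maps at every scale (shallow to deep)."""
    def __init__(self, in_ch=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.blocks.append(conv_block(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for i, block in enumerate(self.blocks):
            x = block(x)
            feats.append(x)
            if i < len(self.blocks) - 1:
                x = self.pool(x)
        return feats


class DualEncoderUNet(nn.Module):
    """Illustrative dual-encoder U-Net: separate PET and CT encoders, one shared decoder."""
    def __init__(self, num_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        self.pet_enc = Encoder(1, widths)   # PET branch (assumed single-channel input)
        self.ct_enc = Encoder(1, widths)    # CT branch (assumed single-channel input)
        self.ups, self.dec_blocks = nn.ModuleList(), nn.ModuleList()
        in_ch = 2 * widths[-1]              # fused (PET + CT) bottleneck channels
        for w in reversed(widths[:-1]):
            self.ups.append(nn.ConvTranspose2d(in_ch, w, kernel_size=2, stride=2))
            # decoder stage sees the upsampled features plus the fused skip connection
            self.dec_blocks.append(conv_block(w + 2 * w, w))
            in_ch = w
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, pet, ct):
        # Encode each modality separately, then fuse by channel concatenation per scale.
        fused = [torch.cat([p, c], dim=1)
                 for p, c in zip(self.pet_enc(pet), self.ct_enc(ct))]
        x = fused[-1]
        for i, (up, block) in enumerate(zip(self.ups, self.dec_blocks)):
            x = up(x)
            x = block(torch.cat([x, fused[-(i + 2)]], dim=1))
        return self.head(x)


if __name__ == "__main__":
    # Dummy forward pass on a 128x128 PET/CT slice pair (spatial size must be divisible by 8).
    model = DualEncoderUNet(num_classes=3)
    pet, ct = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
    print(model(pet, ct).shape)  # -> torch.Size([1, 3, 128, 128])
```

The sketch uses 2D slices for brevity; a volumetric PET-CT pipeline would typically swap the 2D layers for their 3D counterparts, and the CT-to-PET generation model mentioned in the abstract is a separate component not shown here.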