Automatic Brain Segmentation for PET/MR Dual-Modal Images Through a Cross-Fusion Mechanism

Published: 01 Jan 2025, Last Modified: 14 May 2025 · IEEE J. Biomed. Health Informatics 2025 · CC BY-SA 4.0
Abstract: Precise segmentation of brain regions and tissues is usually a prerequisite for detecting and diagnosing neurological disorders in neuroscience. To exploit the abundant functional and structural information in dual-modality positron emission tomography/magnetic resonance (PET/MR) images, we propose a novel 3D whole-brain segmentation network that introduces a cross-fusion mechanism to delineate 45 brain regions. Specifically, the network processes PET and MR images simultaneously, employing UX-Net together with a cross-fusion block for feature extraction and fusion in the encoder. We compare our method with other deep learning-based methods, including 3D UX-Net, SwinUNETR, UNETR, nnFormer, UNet3D, NestedUNet, ResUNet, and VNet. The experimental results demonstrate that the proposed method achieves better segmentation performance under both visual and quantitative evaluation and produces more precise segmentations in all three views while preserving fine details. In particular, the proposed method achieves superior quantitative results: a Dice coefficient of 85.73% $\pm$ 0.01%, a Jaccard index of 76.68% $\pm$ 0.02%, a sensitivity of 85.00% $\pm$ 0.01%, a precision of 83.26% $\pm$ 0.03%, and a Hausdorff distance (HD) of 4.4885 $\pm$ 14.85%. Moreover, the distribution and correlation of the standardized uptake value (SUV) within each volume of interest (VOI) are also evaluated (Pearson correlation coefficient, PCC > 0.9), indicating consistency with the ground truth and further demonstrating the superiority of the proposed method. In future work, we will apply our whole-brain segmentation method in clinical practice to assist doctors in accurately diagnosing and treating brain diseases.
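The overlap metrics reported above (Dice and Jaccard) are standard per-region scores computed between a predicted label volume and the ground truth. The sketch below shows one conventional way to compute them per region; it is illustrative only (the function names, and the assumption that the 45 regions are labeled 1..45, are ours, not from the paper).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 1.0

def per_region_metrics(pred_labels: np.ndarray,
                       truth_labels: np.ndarray,
                       n_regions: int = 45) -> dict:
    """Per-region (Dice, Jaccard) for a multi-label segmentation,
    assuming region labels 1..n_regions (0 = background)."""
    return {r: (dice_coefficient(pred_labels == r, truth_labels == r),
                jaccard_index(pred_labels == r, truth_labels == r))
            for r in range(1, n_regions + 1)}
```

A mean Dice such as the reported 85.73% would then be the average of these per-region (or per-subject) scores.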