Keywords: Medical Image Translation, PaPaGAN, Partially Paired Generative Adversarial Network
TL;DR: Translation between partially paired MRI and PET scans.
Abstract: The integration of paired medical MRI (Magnetic Resonance Imaging) and PET (Positron Emission Tomography) images holds considerable significance in clinical evaluations and offers a richer source of clinical insights.
However, acquiring paired MRI-PET images poses challenges due to various practical constraints.
To address this, MRI-PET translation emerges as a valuable approach, enabling professionals to obtain complementary information from one modality and enhance decision-making using only single-modality images.
Existing approaches predominantly rely on either using paired MRI-PET images for training or treating the entire dataset as unpaired.
In this study, we introduce PaPaGAN, an innovative end-to-end Partially Paired Generative Adversarial Network tailored for partially paired images. In a practical setting, where a mix of paired and unpaired data is available, PaPaGAN leverages the unpaired data to learn a mapping function capable of generating a noisy intermediate image. To refine this intermediate image and correct inconsistencies introduced during the unpaired translation process, PaPaGAN employs a secondary image translation module. This module is trained using the paired data, which provides a consistent mapping from source-domain to target-domain images. By effectively harnessing both paired and unpaired MRI-PET images, our method significantly enhances translation capabilities, facilitating precise image translation and elevating image quality for the target modality. Our quantitative and qualitative medical image translation experiments on two public datasets, ADNI and OASIS, demonstrate the superiority of PaPaGAN over alternative image translation methods.
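The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator names, architectures, and losses mentioned in the comments are assumptions, standing in for whatever networks PaPaGAN actually uses.

```python
import torch
import torch.nn as nn

def tiny_generator(in_ch=1, out_ch=1):
    """Placeholder convolutional generator standing in for the real networks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

# Stage 1: generator trained on the unpaired subset (e.g. with adversarial /
# cycle-consistency objectives), producing a noisy intermediate PET estimate.
G_unpaired = tiny_generator()

# Stage 2: refinement module trained on the paired subset (e.g. with a
# supervised reconstruction loss), cleaning up the intermediate image.
G_refine = tiny_generator()

def translate(mri):
    """MRI -> noisy intermediate PET -> refined PET."""
    intermediate = G_unpaired(mri)
    return G_refine(intermediate)

mri = torch.randn(1, 1, 64, 64)  # dummy single-channel MRI slice
pet = translate(mri)
print(pet.shape)                  # torch.Size([1, 1, 64, 64])
```

At inference time only the composed mapping is needed; the split into paired and unpaired training data matters only for how the two generators are optimized.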
Track: 5. Biomedical generative AI
Registration Id: DYNQXGV8WBN
Submission Number: 22