Keywords: Synthetic Data Generation, Cross-Modality Translation, Diffusion Models, GANs, MRI, fMRI, Alzheimer’s Disease
TL;DR: We propose a GAN-guided diffusion model that generates missing MRI and fMRI-derived connectivity data, improving fidelity and preserving disease-related patterns.
Abstract: Multimodal brain imaging provides complementary insights into brain structure and function, but its utility is often limited by missing modalities. Traditional imputation and subsampling strategies are computationally simple but risk introducing bias or discarding valuable samples. Recently, generative models have emerged as powerful alternatives for synthesizing missing modalities. In this study, we introduce a GAN-guided diffusion framework for cross-modality translation, designed to generate both T1-weighted MRI and functional network connectivity (FNC) data. The framework integrates conditional diffusion modeling, adversarial learning, and cycle-consistency, enabling training with both paired and unpaired samples. On Alzheimer's disease data, our approach outperformed baseline methods, achieving a higher peak signal-to-noise ratio (PSNR) (24.95) and structural similarity index measure (SSIM) (0.86) for T1 synthesis, as well as improved correlation with real FNCs (0.65). Furthermore, our results demonstrate that the model captures variability across clinical groups without supervision from diagnostic labels, producing realistic and clinically meaningful synthetic modalities for downstream analysis and biomarker discovery.
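The abstract describes a training objective that combines a conditional diffusion (noise-prediction) loss, an adversarial loss, and a cycle-consistency loss. The minimal sketch below illustrates how such a composite objective might be assembled; the array shapes, discriminator scores, and loss weights (`lam_adv`, `lam_cyc`) are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Toy stand-ins for network outputs (hypothetical shapes and values; the
# paper's actual architectures and loss weights are not given in the abstract).
eps_true = rng.standard_normal((4, 8))   # noise added in the forward diffusion step
eps_pred = rng.standard_normal((4, 8))   # denoiser's estimate of that noise
d_fake   = rng.uniform(0.1, 0.9, size=4) # discriminator scores on synthesized samples
x_real   = rng.standard_normal((4, 8))   # source-modality input
x_cycled = rng.standard_normal((4, 8))   # input after A -> B -> A translation

diffusion_loss = mse(eps_pred, eps_true)           # standard noise-prediction objective
adv_loss = float(np.mean(-np.log(d_fake)))         # non-saturating generator loss
cycle_loss = mse(x_cycled, x_real)                 # cycle-consistency term (enables unpaired training)

lam_adv, lam_cyc = 0.1, 1.0                        # hypothetical weighting
total_loss = diffusion_loss + lam_adv * adv_loss + lam_cyc * cycle_loss
print(total_loss > 0)
```

In a real implementation the three terms would be computed from network outputs and backpropagated jointly; the cycle term is what allows the framework to use unpaired samples, since it only requires the round-trip translation to reconstruct the input.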
Submission Number: 44