Joint image synthesis and fusion with converted features for Alzheimer's disease diagnosis

Published: 01 Jan 2025, Last Modified: 13 Jul 2025, Eng. Appl. Artif. Intell. 2025, CC BY-SA 4.0
Abstract: The effectiveness of complete multi-modal neuroimaging data for the diagnosis of Alzheimer's disease has been extensively demonstrated. However, dealing with incomplete modalities is a common challenge in multi-modal neuroimaging diagnosis. Mainstream approaches synthesize the missing neuroimaging data in order to make full use of all available samples, but they treat image synthesis and disease diagnosis as two independent tasks, overlooking the potential value of the features produced during cross-modality image synthesis for downstream tasks. To this end, we propose the Joint Image Synthesis and Classification Learning method, which jointly optimizes image synthesis and disease diagnosis from incomplete neuroimaging modalities. Our approach comprises a submodule for synthesizing missing neuroimaging data and a decision fusion submodule that integrates features from the different modalities with the high-level/converted features generated during synthesis. Experimental results demonstrate that our joint optimization approach outperforms conventional two-stage methods. Our method handles arbitrary missing-modality scenarios and achieves state-of-the-art performance on both Alzheimer's disease identification and mild cognitive impairment conversion classification tasks. Finally, we further explore the importance of different converted features. These results highlight the effectiveness of our approach in addressing the challenges of Alzheimer's disease diagnosis and provide insights for future research in multi-modal medical image analysis.
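To make the joint-optimization idea concrete, the following is a minimal PyTorch-style sketch, assuming an MRI-to-PET synthesis setting: a generator synthesizes the missing modality and exposes its encoder output as the "converted" feature, a classifier fuses this with features from the available modality, and both are trained under a single combined loss. The module names, network depths, toy tensor sizes, and equal loss weighting are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Synthesizes a missing modality (e.g., PET) from an available one (e.g., MRI).
    The encoder output serves as the high-level 'converted' feature."""
    def __init__(self, channels=1, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(feat_dim, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        converted = self.encoder(x)            # converted feature for fusion
        synthesized = self.decoder(converted)  # synthesized missing modality
        return synthesized, converted

class FusionClassifier(nn.Module):
    """Fuses real-modality features with converted features for diagnosis."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.mri_encoder = nn.Sequential(
            nn.Conv3d(1, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, mri, converted):
        f_mri = self.pool(self.mri_encoder(mri)).flatten(1)
        f_conv = self.pool(converted).flatten(1)
        return self.head(torch.cat([f_mri, f_conv], dim=1))

# Joint optimization: one backward pass over the combined objective,
# so gradients from the diagnosis loss also shape the synthesis features.
gen, clf = Generator(), FusionClassifier()
opt = torch.optim.Adam(list(gen.parameters()) + list(clf.parameters()), lr=1e-4)
recon_loss, ce_loss = nn.L1Loss(), nn.CrossEntropyLoss()

mri = torch.randn(2, 1, 32, 32, 32)   # toy volumes; real scans are larger
pet = torch.randn(2, 1, 32, 32, 32)   # ground-truth PET (available for complete samples)
label = torch.tensor([0, 1])          # diagnostic labels

fake_pet, converted = gen(mri)
logits = clf(mri, converted)
loss = recon_loss(fake_pet, pet) + ce_loss(logits, label)  # balancing weights omitted
opt.zero_grad(); loss.backward(); opt.step()
```

In a two-stage pipeline the generator would be trained first and frozen; the point of the sketch is that optimizing both losses together lets the classification signal influence the synthesized representations.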