MGDR: Multi-modal Graph Disentangled Representation for Brain Disease Prediction

Published: 01 Jan 2024, Last Modified: 02 Aug 2025 · MICCAI (2) 2024 · CC BY-SA 4.0
Abstract: In disease prediction tasks, medical data of different modalities can provide complementary information for diagnosis. However, existing multi-modal learning methods often focus on learning a shared representation across modalities, without fully exploiting the complementary information that each modality provides. To overcome this limitation, we propose a novel Multi-modal Graph Disentangled Representation (MGDR) approach for the brain disease prediction problem. Specifically, we first construct a modality-specific graph for each modality and employ a Graph Convolutional Network (GCN) to learn node representations. We then develop a modality disentanglement model that separates the information shared across modalities (common information) from the information unique to each modality (private information). To remove possible noise from the private information, a contrastive learning module is employed to learn a more compact private representation for each modality, and a new Multi-modal Perception Attention (MPA) module integrates the private representations across modalities. Finally, the common and private information are combined for disease prediction. Experiments on the ABIDE and TADPOLE datasets demonstrate that MGDR achieves the best performance compared with recent state-of-the-art methods.
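The abstract outlines a pipeline of per-modality graph encoders, common/private disentanglement, attention-based fusion of the private branches, and a joint classifier. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline, not the authors' implementation: the simple normalized-adjacency GCN layer, all layer sizes, the mean pooling used for the common branch, and the single-linear attention standing in for the MPA module are assumptions, and the contrastive objective on the private representations is omitted for brevity.

```python
# Minimal illustrative sketch of the MGDR pipeline described in the abstract.
# All module names, dimensions, and design details are assumptions for
# illustration; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), where A_hat is a
    symmetrically normalized adjacency (assumed construction)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1).clamp(min=1.0)
        d_inv_sqrt = deg.pow(-0.5)
        a_hat = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return F.relu(a_hat @ self.lin(x))


class MGDRSketch(nn.Module):
    """Hypothetical multi-modality version: per-modality GCN encoders,
    common/private disentanglement heads, attention fusion of the private
    branches (standing in for MPA), and a classifier on [common; private]."""
    def __init__(self, in_dims, hid_dim=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(SimpleGCNLayer(d, hid_dim) for d in in_dims)
        self.common_heads = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in in_dims)
        self.private_heads = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in in_dims)
        self.attn = nn.Linear(hid_dim, 1)
        self.classifier = nn.Linear(2 * hid_dim, n_classes)

    def forward(self, feats, adjs):
        commons, privates = [], []
        for x, adj, enc, c_head, p_head in zip(
                feats, adjs, self.encoders, self.common_heads, self.private_heads):
            h = enc(x, adj)
            commons.append(c_head(h))      # modality's view of shared information
            privates.append(p_head(h))     # modality-specific information
        common = torch.stack(commons).mean(dim=0)        # fuse shared views
        priv = torch.stack(privates, dim=1)              # (N, M, hid_dim)
        weights = torch.softmax(self.attn(priv), dim=1)  # attention over modalities
        private = (weights * priv).sum(dim=1)
        logits = self.classifier(torch.cat([common, private], dim=-1))
        return logits, commons, privates


# Toy usage: 100 subjects, two modalities with different feature dimensions.
if __name__ == "__main__":
    n = 100
    feats = [torch.randn(n, 116), torch.randn(n, 30)]
    adjs = [torch.eye(n) for _ in feats]   # placeholder population graphs
    model = MGDRSketch(in_dims=[116, 30])
    logits, commons, privates = model(feats, adjs)
    print(logits.shape)  # torch.Size([100, 2])
```

In practice the common views and private representations returned here would also feed auxiliary losses (alignment of the common views across modalities and a contrastive loss compacting each private branch), as the abstract describes; those terms are not shown in this sketch.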