SPromptGL: Semantic Prompt Guided Graph Learning for Multi-modal Brain Disease

Published: 2025 · Last Modified: 12 Nov 2025 · MICCAI (12) 2025 · License: CC BY-SA 4.0
Abstract: Multi-modal brain disease diagnosis integrates medical data from different modalities to yield more robust and comprehensive predictions across diverse diseases. However, recent methods generally fail to account for the modality-specific discriminative regions in semantic information, causing models to focus on non-lesion areas while neglecting the actual lesion regions. To address this issue, we propose Semantic Prompt-guided Graph Learning (SPromptGL), a novel approach for multi-modal disease prediction that captures the discriminative regions of each modality while enhancing cross-modal interaction and fusion. First, to explore the relationships among subjects across modalities, we construct an interactive multi-relation graph over the multi-modal data, learned dynamically through dedicated graph learning loss terms; a multi-layer graph convolutional network then learns context-enriched representations for each subject. Second, to better capture the salient region representations of each modality, we propose a semantic prompt-guided learning network that mines the modality-specific lesion regions of related diseases. Specifically, a set of semantic prompts for related brain diseases first guides the capture of fine-grained local details to enhance patch representations, and we then couple this with a relation-aware embedding strategy to refine discriminative features. Compared with state-of-the-art methods, our approach achieves superior performance on multiple benchmark datasets. Code is available at https://github.com/wanxixi11/SPromptGL.
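To make the prompt-guided enhancement step concrete, the following is a minimal sketch (not the authors' implementation) of one plausible reading: each patch embedding attends over a small set of disease-related semantic prompt vectors, and the prompt-weighted context is added back as a residual update. All names (`prompt_guided_enhance`), the attention form, and the residual design are assumptions for illustration; the actual SPromptGL network is defined in the linked repository.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prompt_guided_enhance(patches, prompts):
    """Hypothetical sketch: enhance patch features via attention to semantic prompts.

    patches: (N, d) patch embeddings for one modality
    prompts: (K, d) disease-related semantic prompt embeddings
    Returns: (N, d) enhanced patch features (residual update).
    """
    d = patches.shape[1]
    # each patch attends over the K prompts (scaled dot-product attention)
    attn = softmax(patches @ prompts.T / np.sqrt(d), axis=1)  # (N, K)
    context = attn @ prompts                                  # (N, d) prompt-weighted context
    return patches + context                                  # residual enhancement

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))   # 6 patches, 16-dim embeddings
P = rng.normal(size=(4, 16))   # 4 semantic prompts
Z = prompt_guided_enhance(X, P)
print(Z.shape)
```

In a trained model the prompt vectors would be learnable parameters optimized jointly with the graph learning losses, so that they come to highlight lesion-relevant patches rather than arbitrary regions.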