Keywords: graph learning, brain networks, diffusion model, multimodal fusion
Abstract: Multimodal brain graph fusion enables the integration of structural and functional information from multiple neuroimaging modalities to advance brain graph analysis. However, existing methods struggle to simultaneously capture (1) intra-modal dependencies (modality-specific topological information) and (2) inter-modal correlations (structural-functional coupling information), both of which are essential to multimodal brain graph fusion. This limitation leads to inadequate fusion of brain structural and functional information, ultimately failing to reflect true brain organization. To fill this gap, this paper proposes a novel Cross-modal Brain Graph Diffusion (Xdiff) approach. Xdiff introduces a dual graph diffusion mechanism with intra- and inter-modal diffusion modules to capture intra-modal dependencies and inter-modal correlations, respectively. During the diffusion processes, we use an energy constraint function to ensure diffusion consistency, thereby enhancing the model's stability when learning from multimodal brain graphs. Furthermore, we design a prompt-based fusion strategy to flexibly integrate multimodal features for robust fusion. Empirically, Xdiff achieves state-of-the-art performance on three datasets for brain disorder detection tasks, with accuracy improvements of 4.6%, 2.5%, and 5.6%, respectively.
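To make the dual-diffusion idea concrete, the following is a minimal sketch of one graph-diffusion step with a Dirichlet-energy monitor. This is not the paper's implementation: the update rule (explicit-Euler heat diffusion), the step size `alpha`, and the use of Dirichlet energy as the consistency constraint are all assumptions; the same step could be applied per modality (intra-modal) and on a cross-modal correlation graph (inter-modal).

```python
# Hedged sketch of graph diffusion with an energy check (assumed form,
# not Xdiff's actual modules or constraint).
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2} of a nonnegative adjacency."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def diffusion_step(X, A_hat, alpha=0.1):
    """One heat-diffusion step: move node features toward neighbor averages."""
    return X + alpha * (A_hat @ X - X)

def dirichlet_energy(X, A_hat):
    """tr(X^T L X) with L = I - A_hat; measures feature smoothness on the graph.
    A plausible stand-in for the paper's (unspecified) energy constraint."""
    L = np.eye(A_hat.shape[0]) - A_hat
    return float(np.trace(X.T @ L @ X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy symmetric nonnegative adjacency (e.g. a structural connectivity graph).
    A = np.abs(rng.standard_normal((6, 6)))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    A_hat = normalize_adj(A)
    X = rng.standard_normal((6, 3))  # node features
    e0 = dirichlet_energy(X, A_hat)
    X1 = diffusion_step(X, A_hat)
    e1 = dirichlet_energy(X1, A_hat)
    print(f"energy before: {e0:.4f}, after one step: {e1:.4f}")
```

For small `alpha`, each step provably does not increase the Dirichlet energy (the update damps high-frequency graph components), which is one way an energy function can certify that diffusion is behaving consistently.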
Primary Area: applications to neuroscience & cognitive science
Submission Number: 11939