Interpretation of Human Behavior from Multi-modal Brain MRI Images based on Graph Deep Neural Networks and Attention Mechanism

Published: 01 Jan 2021, Last Modified: 12 Nov 2025 · VISIGRAPP (4: VISAPP) 2021 · CC BY-SA 4.0
Abstract: Interpreting human behavior by exploiting the complementary information offered by multimodal functional magnetic resonance imaging (fMRI) data is a challenging task. In this paper, we propose to fuse task-fMRI (brain activation) and rest-fMRI (functional connectivity), incorporating structural MRI (sMRI) as an adjacency matrix to preserve the rich spatial structure between voxels of the brain. We then treat the structural-functional brain connections (3D mesh) as a graph. The aim is to quantify each subject’s performance in voice recognition and identification. More specifically, we propose an advanced multi-view graph auto-encoder based on the attention mechanism, called MGATE, which seeks to learn better representations from both the task- and rest-fMRI modalities using the Brain Adjacency Graph (BAG) constructed from sMRI. It yields a multi-view representation learned at all vertices of the brain, which is then used as input to our trace regression model.
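The pipeline described above can be sketched in miniature: a GAT-style attention layer encodes each fMRI view over a shared adjacency graph (standing in for the BAG derived from sMRI), the two view embeddings are fused, and a trace-regression readout maps the fused representation to a scalar behavior score. All names, the mean-fusion choice, and the toy dimensions here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(X, A, W, a):
    """One GAT-style layer: attention restricted to edges of adjacency A."""
    H = X @ W                                   # (n, dh) projected node features
    dh = H.shape[1]
    # pairwise attention logits e_ij = a^T [h_i || h_j]
    logits = (H @ a[:dh])[:, None] + (H @ a[dh:])[None, :]
    logits = np.where(A > 0, logits, -1e9)      # mask non-edges of the graph
    alpha = softmax(logits, axis=1)             # attention coefficients per node
    return np.tanh(alpha @ H)                   # aggregated node embeddings

rng = np.random.default_rng(0)
n, d, dh = 6, 4, 3                              # toy sizes: 6 vertices, 4 features
A = (rng.random((n, n)) > 0.5).astype(float)    # stand-in Brain Adjacency Graph
np.fill_diagonal(A, 1)                          # keep self-loops
X_task = rng.standard_normal((n, d))            # task-fMRI view (activation)
X_rest = rng.standard_normal((n, d))            # rest-fMRI view (connectivity)
W = rng.standard_normal((d, dh))                # shared projection (assumption)
a = rng.standard_normal(2 * dh)                 # attention vector

# encode each view on the shared graph, then fuse by averaging (assumption)
Z = 0.5 * (graph_attention(X_task, A, W, a) + graph_attention(X_rest, A, W, a))

# trace-regression readout: scalar behavior score y = trace(B^T Z)
B = rng.standard_normal(Z.shape)
y = float(np.trace(B.T @ Z))
print(Z.shape, y)
```

In the paper's setting the fused multi-view representation at all brain vertices plays the role of `Z`, and the trace-regression coefficients `B` would be learned to predict each subject's voice-recognition performance rather than drawn at random.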