MAGF: A Statistically-Grounded Attention Mechanism for Multimodal Fusion in Early Alzheimer’s Detection
Keywords: Multimodal Fusion, Alzheimer’s Disease, Interpretable AI, Attention Mechanism, Representation Learning, Clinical Decision Support.
Abstract: Early detection of Alzheimer's Disease (AD) is critical, yet current diagnostics often rely on costly neuroimaging. We propose the Multi-View Attention-Guided Multimodal Fusion (MAGF) framework, a deep learning model for early AD detection that uses four accessible clinical data modalities: cognitive assessments, cerebrospinal fluid (CSF) biomarkers, genetic factors, and demographics. The core of the framework is a novel, parameter-free attention mechanism grounded in the coefficient of variation (CV), built on the premise that a modality's diagnostic utility is inversely proportional to its relative data dispersion. This yields an inherently interpretable fusion process in which each modality's contribution is a direct function of its statistical reliability. Evaluated on a large ADNI cohort ($N=1,641$), MAGF achieves 89.7\% accuracy and an AUC of 0.92, significantly outperforming a standard multi-head attention baseline by 11.3 percentage points of absolute accuracy. Our work presents a transparent and statistically principled framework that addresses the need for trustworthy AI, mirrors the holistic reasoning of clinicians, and paves the way for wider clinical deployment.
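To make the CV-based weighting concrete, the following is a minimal sketch of one plausible instantiation consistent with the abstract's description; the symbols $\mathrm{CV}_m$, $\alpha_m$, $h_m$, $\mu_m$, $\sigma_m$, and $\epsilon$ are illustrative notation, not taken from the paper. Each modality embedding $h_m$ would receive a weight proportional to the inverse of its coefficient of variation, so that statistically more stable modalities contribute more to the fused representation $z$:

$$\mathrm{CV}_m = \frac{\sigma_m}{\lvert \mu_m \rvert + \epsilon}, \qquad \alpha_m = \frac{\mathrm{CV}_m^{-1}}{\sum_{k=1}^{M} \mathrm{CV}_k^{-1}}, \qquad z = \sum_{m=1}^{M} \alpha_m h_m,$$

where $\mu_m$ and $\sigma_m$ are the mean and standard deviation of modality $m$'s features, $M$ is the number of modalities, and $\epsilon$ is a small constant for numerical stability. Under this reading, the attention weights require no learned parameters and can be interpreted directly as relative-reliability shares across modalities.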
Submission Number: 8