$\text{BrainM}^3$: A Multi-Task Learning Framework Based on A Multi-Level Mixture-of-Experts for Cross-Disease and Cross-Domain Dementia Diagnosis

ICLR 2026 Conference Submission 13876 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Medical Imaging; Multi-task Learning; Mixture of Experts
Abstract: Accurate differential diagnosis of dementia subtypes is crucial because the subtypes follow distinct clinical trajectories and respond differently to treatment. However, rare subtypes such as Lewy Body Dementia (LBD) suffer from data scarcity, and domain shifts across institutions further hinder model generalization. To address these challenges, we propose $\text{BrainM}^3$, a Multi-task learning framework based on a Multi-level Mixture-of-Experts (MoE) architecture for cross-domain and cross-disease Brain modeling. Our model jointly learns Alzheimer’s disease (AD), mild cognitive impairment (MCI), and LBD diagnosis by disentangling disease-shared and disease-specific brain connectivity features. At the domain level, a domain-aware Soft-MoE combined with adversarial training captures domain-invariant foundational brain representations, effectively mitigating scanner and cohort variability. At the task level, task-shared and task-specific Soft-MoEs enable mutual knowledge transfer and fine-grained modeling of pathological features. Experiments on multi-institutional datasets demonstrate that $\text{BrainM}^3$ consistently outperforms baselines under data heterogeneity. Moreover, the model provides interpretable insights into disease-relevant brain networks, suggesting potential clinical utility. Our work highlights the promise of accurate, interpretable models for robust dementia diagnosis in real-world, cross-institution settings. Our code will be released upon acceptance.
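
The abstract names two standard building blocks at a high level: soft expert routing (Soft-MoE) and domain-adversarial training. As an illustration only, the minimal sketch below pairs a Soft-MoE layer in the style of Puigcerver et al. (2024) with a gradient-reversal domain classifier in the style of Ganin & Lempitsky (2015). Every class name, dimension, pooling choice, and the expert MLP design here is a hypothetical placeholder and is not taken from the submission, whose code is not yet released.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass (gradient reversal, Ganin & Lempitsky, 2015)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None


class SoftMoE(nn.Module):
    """Minimal Soft-MoE layer (Puigcerver et al., 2024): every token is
    softly dispatched to expert slots, and expert outputs are softly
    combined back, so no token is hard-routed or dropped."""

    def __init__(self, dim: int, num_experts: int = 4, slots_per_expert: int = 1):
        super().__init__()
        self.slots_per_expert = slots_per_expert
        num_slots = num_experts * slots_per_expert
        # One learnable routing vector per slot.
        self.slot_embed = nn.Parameter(torch.randn(dim, num_slots) * dim ** -0.5)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. ROI-level connectivity embeddings.
        logits = x @ self.slot_embed                  # (batch, tokens, slots)
        dispatch = logits.softmax(dim=1)              # normalize over tokens
        combine = logits.softmax(dim=2)               # normalize over slots
        slots = dispatch.transpose(1, 2) @ x          # (batch, slots, dim)
        outs = [
            expert(slots[:, i * self.slots_per_expert:(i + 1) * self.slots_per_expert])
            for i, expert in enumerate(self.experts)
        ]
        y = torch.cat(outs, dim=1)                    # (batch, slots, dim)
        return combine @ y                            # back to (batch, tokens, dim)


class DomainAdversarialSoftMoE(nn.Module):
    """Hypothetical pairing of a Soft-MoE encoder with a domain classifier
    trained through gradient reversal, pushing the encoder toward
    domain-invariant features."""

    def __init__(self, dim: int, num_domains: int, lam: float = 1.0):
        super().__init__()
        self.moe = SoftMoE(dim)
        self.domain_head = nn.Linear(dim, num_domains)
        self.lam = lam

    def forward(self, x: torch.Tensor):
        h = self.moe(x)                               # (batch, tokens, dim)
        pooled = h.mean(dim=1)                        # (batch, dim)
        domain_logits = self.domain_head(GradReverse.apply(pooled, self.lam))
        return h, domain_logits                       # features + adversarial logits
```

In this formulation each slot is a soft mixture of all input tokens, and the gradient-reversal layer flips the sign of the domain-classification gradient so the encoder is trained to make institutions indistinguishable; this is one common way to realize the adversarial, domain-invariant objective the abstract describes, though the paper's exact design may differ.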
Primary Area: applications to neuroscience & cognitive science
Submission Number: 13876