Federated Brain Tumour Segmentation using Multi-Modal Information Fusion

15 Sept 2025 (modified: 10 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated learning, Multi-modal, Brain tumor, Attention
Abstract: Applying federated learning to privacy-sensitive domains such as medical imaging is an important direction of current research. We introduce a federated framework for multi-modal brain tumour segmentation that integrates hierarchical cross-modal fusion with end-to-end distributed training under simulated inter-site heterogeneity. Our architecture couples a shared encoder with correlation-aware, attention-based fusion across MRI modalities (T1, T1CE, T2, FLAIR) and multi-scale deep supervision, addressing a key gap in multimodal fusion under federated constraints, where modality and site distributions are non-IID and communication budgets are limited. To accelerate convergence and reduce communication, we initialize the federated backbone from a pretrained network, a strategy known to stabilize optimization and decrease the number of rounds needed to reach target accuracy in FL. Empirically, the proposed method delivers consistent gains on Enhancing Tumor (ET), Tumor Core (TC), and Mean Dice over strong centralized baselines, while achieving comparable performance on Whole Tumor (WT), indicating that cross-modal reasoning primarily benefits the more challenging, heterogeneous subregions. We observe faster federated convergence attributable to pretrained initialization, supporting the practicality of our approach for resource-constrained clinical deployments. Collectively, these results advance multimodal fusion in federated neuro-oncology and provide a rigorous, large-scale evaluation.
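The training loop described in the abstract (broadcast a global model, train locally at each site, then aggregate) follows the standard FedAvg pattern, with the global model initialized from pretrained weights rather than at random. A minimal sketch of that round structure, using plain NumPy parameter dictionaries in place of the paper's segmentation network (the function names `fedavg` and `run_round` are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client parameter dicts (FedAvg aggregation)."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

def run_round(global_w, client_datasets, local_update):
    """One communication round: broadcast global weights, run a local
    update at each site, then aggregate proportionally to site size."""
    updated, sizes = [], []
    for data in client_datasets:
        local_w = {k: v.copy() for k, v in global_w.items()}  # broadcast
        updated.append(local_update(local_w, data))           # local training
        sizes.append(len(data))
    return fedavg(updated, sizes)

# Pretrained initialization: start the federated run from existing weights
# (here a dummy dict) instead of a random init, as the abstract proposes.
pretrained_w = {"conv1": np.zeros(3)}
```

In this sketch, faster convergence from pretraining corresponds simply to `pretrained_w` being closer to a good solution than a random draw, so fewer calls to `run_round` are needed.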
Supplementary Material: pdf
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 5484