Keywords: Federated Learning, Multimodal Learning, Missing Modalities, Uncertainty Quantification, Feature Imputation, Medical Imaging
TL;DR: We introduce P-FIN, a probabilistic framework for federated medical imaging that uses calibrated uncertainty to gate unreliable feature imputations and weight client contributions, achieving a +5.36% AUC gain in settings with missing modalities.
Abstract: Multimodal federated learning enables privacy-preserving collaborative model training across healthcare institutions. However, a fundamental challenge arises from modality heterogeneity: many clinical sites possess only a subset of modalities due to resource constraints or workflow variations. Existing approaches address this through feature imputation networks that synthesize missing modality representations, yet these methods produce point estimates without reliability measures, forcing downstream classifiers to treat all imputed features as equally trustworthy. In safety-critical medical applications, this limitation poses significant risks. We propose the Probabilistic Feature Imputation Network (P-FIN), which outputs calibrated uncertainty estimates alongside imputed features. This uncertainty is leveraged at two levels: (1) locally, through sigmoid gating that attenuates unreliable feature dimensions before classification, and (2) globally, through Fed-UQ-Avg, an aggregation strategy that prioritizes updates from clients with reliable imputation. Experiments on federated chest X-ray classification using CheXpert, NIH Open-I, and PadChest demonstrate consistent improvements over deterministic baselines, with +5.36% AUC gain in the most challenging configuration.
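The two uncertainty mechanisms named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gating parameterization (`tau`, `k`), the inverse-uncertainty client weighting, and all function names are assumptions for exposition only.

```python
import numpy as np

def gate_features(imputed, sigma, tau=1.0, k=5.0):
    """Sigmoid gating of imputed features by per-dimension uncertainty.

    Hypothetical parameterization: dimensions whose predicted std-dev
    sigma exceeds the threshold tau are attenuated toward zero, so the
    downstream classifier relies less on unreliable imputations.
    """
    gate = 1.0 / (1.0 + np.exp(k * (sigma - tau)))  # high sigma -> small gate
    return gate * imputed

def fed_uq_avg(client_updates, client_uncertainties, eps=1e-8):
    """Uncertainty-weighted aggregation of client model updates.

    Sketch of the idea behind Fed-UQ-Avg: clients reporting lower mean
    imputation uncertainty receive proportionally larger aggregation
    weight (inverse-uncertainty weighting, normalized to sum to 1).
    """
    w = 1.0 / (np.asarray(client_uncertainties, dtype=float) + eps)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, client_updates))
```

In this sketch, a client with near-zero uncertainty dominates the average, while a highly uncertain client contributes little, mirroring the local/global split described in the abstract.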
Primary Subject Area: Federated Learning
Secondary Subject Area: Uncertainty Estimation
Registration Requirement: Yes
Reproducibility: https://github.com/NafisFuadShahid/PFIN-UQAVG
Visa & Travel: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 335