FLAME: Federated Learning With Masked Autoencoders and Mean-Prototypes Embedding for Sparsely Labeled Medical Images
Abstract: Federated Learning (FL) has emerged as a promising paradigm for collaborative and privacy-preserving model training in medical imaging. However, FL faces major challenges such as data heterogeneity across hospitals or institutions and scarcity of labeled data, particularly in healthcare applications. To address these challenges, we propose FLAME (Federated Learning with masked Autoencoders and Mean-prototypes Embedding) for sparsely labeled medical images. FLAME implements an integrated learning framework in which a masked autoencoder (MAE) learns robust feature representations through reconstruction-based self-supervision, while a Prototypical Network head guides these representations to enhance class separation through mean-prototype embeddings. This learning mechanism enables the encoder to capture rich contextual features from unlabeled data while simultaneously learning discriminative boundaries among classes from limited labeled samples. Our experiments on diverse medical imaging tasks, including PathMNIST, Dermnet, the COVID-19 chest X-ray dataset, and Skin-FL, demonstrate FLAME's superior performance over existing FL techniques. The framework shows significant improvements in both classification accuracy and convergence speed, while maintaining privacy and reducing dependence on labeled data. Most importantly, the proposed integration of MAE and Prototypical Network opens new possibilities for domains that suffer from label scarcity and data heterogeneity, making it particularly valuable for applications such as medical diagnostics.
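The mean-prototype component described above can be sketched as follows. This is a minimal illustration of the general Prototypical Network idea (class prototypes as the mean of labeled embeddings, nearest-prototype classification), not the paper's actual implementation; all function names and the toy data are hypothetical, and the MAE encoder that would produce the embeddings is omitted.

```python
# Hypothetical sketch: mean-prototype embedding classification, as used by
# Prototypical Network heads. Embeddings are plain lists of floats here;
# in FLAME they would come from the shared MAE encoder (not shown).

def mean_prototypes(embeddings, labels):
    """Average the labeled embeddings per class to get one prototype each."""
    grouped = {}
    for emb, y in zip(embeddings, labels):
        grouped.setdefault(y, []).append(emb)
    return {y: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for y, vecs in grouped.items()}

def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def proto_predict(embedding, protos):
    """Assign the class whose mean prototype is nearest in embedding space."""
    return min(protos, key=lambda y: sq_dist(embedding, protos[y]))
```

With few labeled samples per class, averaging into a single prototype per class gives a low-variance class representative, which is why this style of head suits the sparsely labeled setting the abstract targets.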
External IDs: doi:10.1109/tetci.2025.3569759