Personalizing Federated Learning Guided by Site-aggregated Representation for Multi-site One-shot Medical Image Segmentation
Abstract: Personalized federated learning for medical image segmentation (MIS) enables collaborative model training across multiple clinical sites without sharing patient data, by synchronizing a subset of global model parameters while retaining the rest for local adaptation. However, previous methods rely on paired labeled images to train the model, which is hard to apply in real scenarios because manual annotation by medical experts is time-consuming. Additionally, these methods focus on local parameter learning while ignoring inter-site consistency during local training. To address these challenges, we propose a personalized Federated learning framework guided by Site-aggregated Representation (FedSR) for multi-site one-shot medical image segmentation, which exploits site-invariant latent information to boost segmentation performance. Specifically, we propose to learn an omniscient encoder via federated learning, which can not only model the data distribution across multi-site datasets but also adapt to multiple tasks efficiently. Building on this learned robust representation, we further propose to learn a site-aggregated representation across multi-site data by mutual information maximization, and then use this site-aggregated latent representation to guide a personalized dual-task head decoder. Extensive experiments on two MIS tasks demonstrate that the proposed FedSR outperforms state-of-the-art one-shot MIS methods on segmentation.