Abstract: Personalized federated learning (pFL) customizes local models to address heterogeneous data across clients. One prominent research direction in pFL is model decoupling, in which knowledge from a global model is selectively utilized to assist local model personalization. Prior studies primarily convey this selected knowledge through decoupled global-model parameters. However, because deep learning models mix task-related knowledge across their parameters, using these parameters may introduce knowledge irrelevant to specific clients, impeding personalization. To address this, we propose a domain-wise knowledge decoupling approach (pFedDKD), which decouples global-model knowledge into diverse projection segments in the representation space to meet the specific needs of clients with heterogeneous local domains. We develop a Radon transform-based method to facilitate this decoupling, enabling clients to extract the knowledge segments relevant to their personalization. In addition, we present a distillation-based back-projection learning method that fuses local-model knowledge into the global model, ensuring that the updated global-model knowledge remains decouplable by projection. A theoretical analysis confirms that our approach improves generalization. Extensive experiments on four datasets demonstrate that pFedDKD consistently outperforms eleven state-of-the-art baselines, achieving an average improvement of 1.21% in test accuracy over the best-performing baseline.
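To make the projection idea concrete, here is a minimal, illustrative sketch rather than the paper's actual implementation: global-model representations are projected onto per-domain unit directions, yielding one-dimensional "segments" in the spirit of a sliced Radon view of the representation space, and a simple squared-error distillation term pulls a local model's projections back toward its segment. All names (`radon_project`, `directions`), the per-domain-direction setup, and the random stand-in features are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def radon_project(features, directions):
    """Project d-dim representations onto unit directions (a sliced,
    Radon-style view). Each output column is one 1-D projection
    'segment' of the global-model knowledge. (Hypothetical helper.)"""
    # features: (n, d); directions: (d, k), columns assumed unit-norm
    return features @ directions

# Hypothetical setup: 3 client domains, 16-dim representation space.
d, n_domains = 16, 3
directions = rng.normal(size=(d, n_domains))
directions /= np.linalg.norm(directions, axis=0, keepdims=True)

# Random stand-in for representations produced by the global model.
global_features = rng.normal(size=(128, d))
segments = radon_project(global_features, directions)

# Client k extracts only the segment aligned with its local domain.
k = 1
client_segment = segments[:, k]

# Back-projection sketch: a squared-error distillation loss that pulls
# the local model's projected representations toward the client segment.
local_features = global_features + 0.1 * rng.normal(size=(128, d))
loss = np.mean(
    (radon_project(local_features, directions)[:, k] - client_segment) ** 2
)
print(f"distillation loss on segment {k}: {loss:.4f}")
```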