Abstract: Personalized Federated Learning (PFL) has gained significant attention for its ability to handle heterogeneous data effectively. Parameter decoupling is a typical approach to PFL: it splits the model into a feature extractor and a classifier head, where the feature extractor is trained collaboratively to learn a common representation and the classifier head is personalized to local data. Because local training learns only personalized feature information and ignores global information, the generalization ability of the feature extractor is limited. A feasible way to improve the performance of the local model is therefore to make the local feature extractor more generalizable. However, prior work requires transmitting additional feature data beyond the model parameters, which leads to privacy leakage and higher communication overhead. To address these shortcomings, we propose a PFL algorithm with feature alignment via knowledge distillation, named PFAKD. PFAKD enhances the training of local feature extractors by explicitly aligning each sample's local features with the global features, providing finer-grained guidance, while avoiding additional communication overhead and the risk of privacy leakage. We conduct extensive experiments in heterogeneous data scenarios, where PFAKD outperforms other state-of-the-art methods by up to 4.35% in model accuracy. Our code is available at https://github.com/fei0829/PFAKD.
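To make the per-sample feature-alignment idea concrete, below is a minimal PyTorch sketch of one local training round under the parameter-decoupling setup described above. The function name `local_update`, the weighting coefficient `alpha`, and the use of an MSE loss between local and global features are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(global_extractor, local_extractor, local_head,
                 dataloader, optimizer, alpha=0.5, device="cpu"):
    """One local round: supervised loss on the personalized head plus a
    knowledge-distillation term that aligns each sample's local features
    with the frozen global extractor's features (illustrative sketch)."""
    global_extractor.eval()      # frozen teacher providing global representations
    local_extractor.train()
    local_head.train()

    for x, y in dataloader:
        x, y = x.to(device), y.to(device)

        local_feat = local_extractor(x)             # student: personalized features
        with torch.no_grad():
            global_feat = global_extractor(x)       # teacher: global features

        logits = local_head(local_feat)             # personalized classifier head
        ce_loss = F.cross_entropy(logits, y)        # task loss on local data
        align_loss = F.mse_loss(local_feat, global_feat)  # per-sample feature alignment

        loss = ce_loss + alpha * align_loss         # combined local objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Since the teacher is simply the aggregated global feature extractor already held by the client, this alignment requires no extra feature exchange with the server, which is consistent with the abstract's claim of no additional communication overhead.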