Layer-Wise Personalized Federated Learning with Hypernetwork

Published: 2023 (Last Modified: 08 Jan 2026) · Neural Processing Letters, 2023 · License: CC BY-SA 4.0
Abstract: Federated learning is a machine learning paradigm in which decentralized client devices collaboratively train a shared model under the coordination of a central server without sharing local data. Data heterogeneity is one of the major challenges in federated learning, leading to slower convergence and reduced accuracy of the global model. Moreover, existing personalized federated learning (PFL) methods suffer from limited personalization capability and slow model convergence. To address these limitations, we propose a novel hypernetwork-based layer-wise PFL algorithm, pFedLHN. The hypernetwork, located on the server, is trained collaboratively by the clients and generates a unique personalized model for each client based on its local data distribution. Unlike previous PFL algorithms that optimize and personalize the shared model as a whole, we implement layer-wise modules with a parameter-sharing mechanism in the hypernetwork. This design lets the hypernetwork generate parameters independently for each layer of a client's network, so the layer-wise modules can learn the distinct roles played by different layers. This improves both personalization and training efficiency. Experiments on several PFL benchmark datasets demonstrate that pFedLHN outperforms previous methods and generalizes better to clients that did not participate in the training phase.
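To make the mechanism concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a server-side hypernetwork that maps a learned client embedding through a shared trunk and, via one head per layer, emits the parameters of each layer of a client's personalized model. Class and argument names such as `LayerWiseHyperNet`, `embed_dim`, and `target_dims` are illustrative assumptions, not names from the paper.

```python
# Sketch of a layer-wise hypernetwork, assuming a simple two-layer MLP client model.
import torch
import torch.nn as nn


class LayerWiseHyperNet(nn.Module):
    def __init__(self, n_clients, embed_dim=32, hidden_dim=64,
                 target_dims=((784, 128), (128, 10))):
        super().__init__()
        # One trainable embedding per client, encoding its local data distribution.
        self.client_embed = nn.Embedding(n_clients, embed_dim)
        # Shared trunk: parameters reused by all layer-wise modules.
        self.trunk = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Independent head per target layer: emits that layer's weight and bias.
        self.heads = nn.ModuleList([
            nn.Linear(hidden_dim, in_f * out_f + out_f)
            for in_f, out_f in target_dims
        ])
        self.target_dims = target_dims

    def forward(self, client_id):
        h = self.trunk(self.client_embed(client_id))
        params = []
        for head, (in_f, out_f) in zip(self.heads, self.target_dims):
            flat = head(h)
            weight = flat[: in_f * out_f].view(out_f, in_f)
            bias = flat[in_f * out_f:]
            params.append((weight, bias))
        return params  # one (weight, bias) pair per layer of the client model


def client_forward(x, params):
    # Run the generated personalized model using the emitted parameters.
    for i, (w, b) in enumerate(params):
        x = torch.nn.functional.linear(x, w, b)
        if i < len(params) - 1:
            x = torch.relu(x)
    return x


# Usage: the server generates layer-wise parameters for client 3; the client
# computes its local loss, and gradients flow back into the hypernetwork,
# which is thus trained collaboratively by all clients.
hnet = LayerWiseHyperNet(n_clients=10)
params = hnet(torch.tensor(3))
logits = client_forward(torch.randn(8, 784), params)
```

Because each head generates only one layer's parameters, the per-layer modules can specialize while the shared trunk keeps the number of hypernetwork parameters manageable; this is a sketch of the general idea, and the paper's exact module structure may differ.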