Revisiting Hypernetwork in Model-Heterogeneous Personalized Federated Learning

Authors: ICLR 2026 Conference Submission15084 (anonymous)

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Hypernetwork, Personalized Federated Learning
TL;DR: We combine a server-side hypernetwork with customized embedding vectors, shared heads, and a plug-in lightweight global model to introduce two methods with enhanced performance in model-heterogeneous personalized federated learning.
Abstract: Much recent research on personalized federated learning focuses on settings where clients hold heterogeneous models. However, current methods rely on external data, model decoupling, or partial learning, which makes them sensitive to the particular setting. In contrast, we revisit hypernetworks and leverage their strong generalization ability to propose the first practical hypernetwork-based method for model-heterogeneous personalized federated learning. We first propose a **m**odel-**h**eterogeneous **p**ersonalized **f**ederated learning framework based on **h**ypernetworks, **MH-pFedHN**, which accommodates clients with different architectures and generates client-specific model parameters from our customized embedding vectors via a server-side hypernetwork. In addition to a feature extractor, our hypernetwork contains multiple heads: clients with similar parameter counts use the same number of customized embedding vectors and share the same head. Thus, MH-pFedHN enables knowledge sharing across different architectures while reducing the computational cost of parameter generation. To push performance further, we introduce a plug-in component, a lightweight yet effective global model, that enhances learning and generalization; the resulting method is named **MH-pFedHNGD**. Our framework uses no external data and does not require clients to disclose their model architectures, thereby strengthening privacy and security. Experiments across various models and tasks demonstrate that our approach outperforms standard baselines and exhibits strong generalization performance. Our code is available at \url{https://anonymous.4open.science/r/MH-pFL}.
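
As a rough illustration of the design the abstract describes (not the authors' implementation; all class, parameter, and variable names below are hypothetical, and each client is given a single embedding vector for brevity), a server-side hypernetwork with per-client embeddings, a shared feature extractor, and one head per group of similarly sized client models can be sketched in PyTorch as follows:

```python
# Minimal sketch of a server-side hypernetwork for model-heterogeneous pFL:
# learnable per-client embeddings feed a shared feature extractor, and clients
# with similar parameter counts share an output head that emits a flat
# parameter vector sized for that group's models.
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    def __init__(self, num_clients, embed_dim, hidden_dim,
                 head_out_dims, client_groups):
        """client_groups maps each client id to a head/group index;
        head_out_dims[g] is the flat parameter count for group g."""
        super().__init__()
        self.client_groups = client_groups
        # One customized embedding vector per client (the paper may use
        # several vectors per client; one is used here for simplicity).
        self.embeddings = nn.Embedding(num_clients, embed_dim)
        # Feature extractor shared across all clients and architectures.
        self.extractor = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One head per group of similarly sized client models.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in head_out_dims
        )

    def forward(self, client_id: int) -> torch.Tensor:
        """Generate a flat parameter vector for one client; the server would
        reshape it into that client's architecture-specific tensors."""
        z = self.embeddings(torch.tensor([client_id]))
        h = self.extractor(z)
        head = self.heads[self.client_groups[client_id]]
        return head(h).squeeze(0)

# Usage: 4 clients in two size groups (clients 0,1 share head 0; 2,3 share head 1).
hn = HyperNetwork(num_clients=4, embed_dim=32, hidden_dim=64,
                  head_out_dims=[10_000, 50_000], client_groups=[0, 0, 1, 1])
flat_params = hn(client_id=2)  # flattened parameters for client 2's model
```

Sharing a head across similarly sized clients is what allows knowledge to transfer across architectures while keeping the number of heads, and hence the cost of parameter generation, small.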
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15084