Harnessing Heterogeneous Statistical Strength for Personalized Federated Learning via Hierarchical Bayesian Inference

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Personalized federated learning (PFL) methods based on Bayesian approaches tackle the challenges arising from the statistical heterogeneity of client data by computing a personalized posterior distribution over the parameters of each client's local model and constructing a global distribution by aggregating the parameters of these personalized posteriors. However, heuristic aggregation methods introduce strong biases and result in global models with poor generalization. We therefore propose a novel hierarchical Bayesian inference framework for PFL that specifies a conjugate hyper-prior over the parameters of the personalized posteriors. This allows us to jointly compute a global posterior distribution for aggregation and the personalized posteriors at the local level. This hierarchical Bayesian inference framework achieves an elegant balance between local personalization and global model robustness. An extensive empirical study shows that, by effectively sharing statistical strength across the heterogeneous local models while retaining their distinctive characteristics, our framework yields state-of-the-art performance. We also show that existing Bayesian PFL methods are special cases of our framework.
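To give a concrete flavor of hierarchical Bayesian aggregation, the following is a minimal, hypothetical sketch that is not the paper's actual method: it assumes each client's personalized posterior over a single scalar parameter is Gaussian, N(mu_k, sigma_k^2), and places a conjugate Normal hyper-prior N(mu0, tau^2) on the global mean. The conjugate update then yields a precision-weighted global posterior, so uncertain (heterogeneous) clients are automatically down-weighted relative to a plain heuristic average. All function and variable names here are illustrative.

```python
import numpy as np

def global_posterior(mus, sigmas, mu0=0.0, tau=10.0):
    """Combine client posteriors N(mu_k, sigma_k^2) under a conjugate
    Normal hyper-prior N(mu0, tau^2) on the shared global mean.

    Returns the global posterior mean and variance. Illustrative only:
    real PFL posteriors are over full network weights, not a scalar.
    """
    precisions = 1.0 / np.square(sigmas)        # client precisions 1/sigma_k^2
    prior_prec = 1.0 / tau**2                   # hyper-prior precision
    post_prec = prior_prec + precisions.sum()   # precisions add under conjugacy
    post_mean = (prior_prec * mu0 + (precisions * mus).sum()) / post_prec
    return post_mean, 1.0 / post_prec

# Three clients; the third is an outlier but reports high uncertainty.
mus = np.array([1.0, 2.0, 10.0])
sigmas = np.array([0.5, 0.5, 5.0])
mean, var = global_posterior(mus, sigmas)
```

Note the contrast with a heuristic average: `np.mean(mus)` gives about 4.33, while the precision-weighted posterior mean stays near the two confident clients, since the uncertain outlier contributes little precision.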
Lay Summary: Federated Learning (FL) is a way to train machine learning models across many users’ devices, such as smartphones or laptops, without needing to share their private data with anyone. Each user trains a personalized model locally using only their private data. These personalized models are then combined to build a global model. However, since users’ data can vary significantly, combining all these personalized models into a reliable global model while preserving the unique characteristics of each user’s data remains a major challenge. In our work, instead of simply averaging the personalized models to form the global model, we model the relationship between the personalized and global models using a hierarchical Bayesian framework. This framework allows us to jointly find the distribution parameters for both the global and personalized models, achieving an elegant balance between local personalization and global model robustness. We show that many existing personalized FL methods are special cases of our approach. By framing personalized federated learning using a principled Bayesian framework, we offer a fresh perspective on how to improve FL systems. Our work has the potential to make a meaningful impact in fields like healthcare and finance, where both model performance and data privacy are critically important.
Link To Code: https://github.com/mahendrathapa/pFedHB
Primary Area: Deep Learning
Keywords: Federated Learning, Bayesian Inference, Variational Inference
Submission Number: 12131