Adaptive Personalized Federated Learning for Non-IID Data with Continual Distribution Shift

Published: 01 Jan 2024 · Last Modified: 06 Feb 2025 · IWQoS 2024 · License: CC BY-SA 4.0
Abstract: Federated Learning (FL) has surged in popularity, allowing machine learning models to be collaboratively trained on decentralized client data while upholding privacy and security standards. However, relying on locally stored data introduces challenges of data heterogeneity. While many past studies have addressed this non-IID problem, they often overlook the dynamic nature of each individual client’s data or fail to model its continuous shift over time. In this paper, we focus on the challenges posed by temporal data distribution shift combined with non-IID data across clients, a situation that is more prevalent yet more complex in real-world FL. We propose to analytically capture the evolving nature of each local data distribution by modeling it as a time-varying composite of multiple latent Gaussian distributions. We then employ the expectation-maximization (EM) algorithm to infer the distribution model parameters from the currently observed training data, ensuring that the learned mixture proportion weights follow a consistent trajectory. Additionally, by embedding an adaptive data partitioning method into the EM algorithm and using each partition to train a distinct sub-model, we realize an intuitive and novel personalized FL paradigm. This refines FL training by exploiting the heterogeneity and temporal shifts of clients’ datasets. We derive analytical results that guarantee the convergence of our training method. Comprehensive experiments across diverse datasets and distribution configurations also demonstrate improved efficacy compared to several state-of-the-art methods.
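To make the modeling idea concrete, the following is a minimal sketch of plain EM fitting a 1-D Gaussian mixture to one client's local data. This is an illustrative toy only: the paper models a *time-varying* mixture and constrains the mixture proportion weights to follow a consistent trajectory across rounds, and it embeds an adaptive data partitioning step; none of that is reproduced here, and all names below are hypothetical.

```python
import numpy as np

def em_gmm(x, k=2, iters=50, seed=0):
    """Fit a 1-D Gaussian mixture with vanilla EM (toy sketch).

    The paper's method additionally regularizes the mixture weights
    over time and partitions data per component; this version omits both.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize means from random data points, unit variances, uniform weights.
    mu = rng.choice(x, size=k, replace=False)
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Synthetic client data: a 30/70 mixture of two well-separated Gaussians.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 700)])
pi, mu, var = em_gmm(data, k=2)
```

In the paper's setting, the inferred responsibilities would then drive the adaptive partitioning of each client's data, with each partition training its own sub-model.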
