FedLDA: Personalized Federated Learning Through Collaborative Linear Discriminant Analysis

Published: 28 Oct 2023, Last Modified: 15 Dec 2023
Venue: FL@FM-NeurIPS’23 Poster
Student Author Indication: Yes
Keywords: Federated learning, Personalized federated learning
TL;DR: Personalized federated learning via distributed feature density estimation.
Abstract: Data heterogeneity poses a significant challenge to federated learning. Motivated by the universal approximation capability of neural networks, one emerging approach trains personalized models by learning a shared representation coupled with a customized classifier for each client. To the best of our knowledge, apart from the concurrent work FedPAC, the individual classifiers in most existing works are trained only on local datasets, which may result in poor generalization. In this work, we propose FedLDA, which enables federation in classifier training by performing collaborative Linear Discriminant Analysis (LDA) on top of the shared latent representation. Our algorithm design is motivated by the observation that, at network initialization, the extracted features are highly Gaussian, so client LDA models may benefit from distributed estimation of the Gaussian parameters. To handle the high-dimensional, low-sample regime often encountered in personalized federated learning (PFL), we apply a momentum update to the aggregated Gaussian parameters and impose $\ell_1$ regularization on the local covariances. Our numerical results show that, surprisingly, in contrast to multiple state-of-the-art methods, FedLDA maintains the initial Gaussianity of the features. More importantly, our empirical study demonstrates that FedLDA achieves better generalization than state-of-the-art algorithms. Compared with FedPAC, our method is communication-efficient and does not require a validation dataset.
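To make the abstract's pipeline concrete, below is a minimal sketch (not from the paper) of how a collaborative LDA head of this kind could operate: each client computes per-class Gaussian statistics of its encoder features, the aggregated statistics are refreshed with a momentum (exponential moving average) update, and off-diagonal covariance entries are shrunk by soft-thresholding as a stand-in for the $\ell_1$ regularization. All function names, the momentum coefficient `beta`, and the soft-thresholding choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_gaussian_stats(features, labels, num_classes):
    """Per-class means, counts, and a pooled covariance from one client's
    encoder features. `features`: (n, d) array, `labels`: (n,) int array."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    counts = np.zeros(num_classes)
    cov = np.zeros((d, d))
    for c in range(num_classes):
        idx = labels == c
        if idx.sum() == 0:
            continue
        means[c] = features[idx].mean(axis=0)
        counts[c] = idx.sum()
        centered = features[idx] - means[c]
        cov += centered.T @ centered
    cov /= max(counts.sum() - num_classes, 1)
    return means, counts, cov

def momentum_update(global_stat, local_stat, beta=0.9):
    """EMA-style momentum update of an aggregated Gaussian parameter."""
    return beta * global_stat + (1.0 - beta) * local_stat

def soft_threshold(cov, lam):
    """Sparsify off-diagonal covariance entries (l1-style shrinkage),
    keeping the diagonal variances intact."""
    shrunk = np.sign(cov) * np.maximum(np.abs(cov) - lam, 0.0)
    np.fill_diagonal(shrunk, np.diag(cov))
    return shrunk

def lda_predict(features, means, cov, priors):
    """Linear discriminant scores under a shared covariance:
    delta_c(x) = x^T S^{-1} mu_c - 0.5 mu_c^T S^{-1} mu_c + log pi_c."""
    prec = np.linalg.pinv(cov)
    scores = (features @ prec @ means.T
              - 0.5 * np.einsum('cd,dk,ck->c', means, prec, means)
              + np.log(priors + 1e-12))
    return scores.argmax(axis=1)
```

In a full federated round, each client would send its (means, counts, covariance) triple for aggregation, with the server averaging them weighted by the class counts before the momentum update; this part is likewise a sketch of one plausible protocol, not the paper's.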
Submission Number: 42