Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization

Published: 28 Oct 2023, Last Modified: 13 Dec 2023, FL@FM-NeurIPS'23 Poster
Student Author Indication: Yes
Keywords: Personalized federated learning, Consensus optimization, Representation learning
TL;DR: We propose and experimentally verify a personalized federated learning algorithm whose main step introduces a data-centric regularization term to enforce consensus at the representation level.
Abstract: Federated learning is a large-scale machine learning training paradigm in which data is distributed across clients and can be highly heterogeneous from one client to another. To ensure personalization in client models while keeping the local models sufficiently similar (i.e., to prevent ``client-drift''), it has recently been proposed to cast the federated learning problem as a consensus optimization problem, where local models are trained on local data but are forced to be similar via a regularization term. In this paper we propose an improved federated learning algorithm that enforces consensus on the representation part of each local model rather than on the whole local model. This design naturally reflects the fact that today's deep networks are typically partitioned into a feature-extraction (representation) part and a prediction part. Our algorithm offers greater flexibility than previous work that enforces an exactly shared representation, which matters in highly heterogeneous settings, since the representation part can differ substantially with the data distribution. We validate its performance experimentally on standard datasets.
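To make the idea concrete, here is a minimal sketch of what a local client update with representation-level consensus could look like. This is not the authors' implementation; the model split (`features`/`head`), the reference network `global_features`, the penalty weight `lam`, and the helper `local_step` are all hypothetical names, and the specific penalty (an MSE between local and shared representations evaluated on the local batch, i.e., a data-centric rather than parameter-centric regularizer) is one plausible reading of the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Toy model partitioned into a representation part and a prediction part."""
    def __init__(self, in_dim=32, rep_dim=16, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())  # representation
        self.head = nn.Linear(rep_dim, n_classes)                             # prediction

    def forward(self, x):
        z = self.features(x)
        return self.head(z), z

def local_step(model, global_features, x, y, lam=0.1, lr=1e-2):
    """One local SGD step: task loss plus a data-centric consensus penalty that
    pulls the local representation toward the shared one *on the local data*,
    instead of penalizing parameter differences directly."""
    logits, z = model(x)
    with torch.no_grad():
        z_ref = global_features(x)  # shared representation evaluated on the same batch
    loss = F.cross_entropy(logits, y) + lam * F.mse_loss(z, z_ref)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad  # plain SGD update
            p.grad = None
    return loss.item()

# Toy usage on random data
model = Net()
global_features = Net().features  # stand-in for the server-side representation
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(local_step(model, global_features, x, y))
```

Because the penalty only couples the representation outputs (not the head and not the raw parameters), each client's prediction layer remains fully personalized, and the representations are free to deviate where the local data distribution demands it.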
Submission Number: 26