Partial Disentanglement with Partially-Federated GANs (PaDPaF)

Published: 16 May 2023 (FLSys 2023), Last Modified: 29 Apr 2024
Keywords: federated learning, generative adversarial networks, representation learning, self-supervised learning
TL;DR: We learn a client-invariant representation with federated learning by 1) applying federated averaging only to the client-invariant sub-modules, and 2) self-supervised regularization that varies client-specific factors (both sketched below).
Abstract: Federated learning has become a popular machine learning paradigm with many potential real-life applications, including recommendation systems, the Internet of Things (IoT), healthcare, and self-driving cars. Though most current applications focus on classification-based tasks, learning personalized generative models remains largely unexplored, and their benefits in the heterogeneous setting are still not well understood. This work proposes a novel architecture combining global client-agnostic and local client-specific generative models. We show that, using standard techniques for training federated models, our proposed model achieves privacy and personalization by implicitly disentangling the globally consistent representation (i.e., content) from the client-dependent variations (i.e., style). Using this decomposition, personalized models can generate locally unseen labels while preserving the client's style, and a simple linear classifier trained on the global content features predicts the labels for all clients with high accuracy. Furthermore, disentanglement enables other essential applications, such as data anonymization, by sharing only the content. Extensive experimental evaluation corroborates our findings, and we also provide partial theoretical justification for the proposed approach.
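
The TL;DR mentions running federated averaging only on the client-invariant sub-modules. Below is a minimal sketch of what such partial aggregation could look like, assuming each client's model is a PyTorch state dict whose shared ("content") parameters are identified by a name prefix; the prefix and function names are hypothetical, not the paper's actual interface.

```python
# Minimal sketch of partial federated averaging: only the parameters of the
# client-invariant ("content") submodule are aggregated; client-specific
# ("style") parameters stay local. The "content." prefix is a hypothetical
# naming convention, not the paper's actual module layout.
from collections import OrderedDict
import copy
import torch

def partial_fedavg(client_state_dicts, shared_prefix="content."):
    """Average only parameters whose names start with `shared_prefix`."""
    shared_keys = [k for k in client_state_dicts[0] if k.startswith(shared_prefix)]
    averaged = OrderedDict(
        (k, torch.stack([sd[k].float() for sd in client_state_dicts]).mean(dim=0))
        for k in shared_keys
    )
    # Each client keeps its own style parameters and only the shared
    # content parameters are overwritten with the server average.
    new_states = []
    for sd in client_state_dicts:
        updated = copy.deepcopy(sd)
        updated.update(averaged)
        new_states.append(updated)
    return new_states
```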
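
The abstract also states that a simple linear classifier trained on the global content features predicts labels for all clients with high accuracy. A sketch of such a linear probe follows, assuming a frozen `content_encoder` that maps inputs to content feature vectors; all names here are placeholders rather than the paper's code.

```python
# Sketch of the linear-probe evaluation described in the abstract: a frozen
# content representation feeds a simple linear classifier. `content_encoder`,
# `loader`, and the dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

def train_linear_probe(content_encoder, loader, feat_dim, num_classes, epochs=5):
    content_encoder.eval()                      # keep the representation frozen
    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = content_encoder(x)      # client-invariant content features
            loss = loss_fn(probe(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```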
Community Implementations: [1 code implementation on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2212.03836/code)