Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach

28 Sep 2020 (modified: 05 Mar 2021) · ICLR 2021 Conference Blind Submission
  • Keywords: Federated Learning, GAN, Deep Learning
  • Abstract: Recently, Generative Adversarial Networks (GANs) have demonstrated their potential in federated learning, i.e., learning a centralized model from data privately hosted by multiple sites. A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites. A major theoretical challenge for federated GANs is the heterogeneity of the local data distributions. Traditional approaches cannot guarantee learning the target distribution, which is a mixture of the highly different local distributions. This paper tackles this theoretical challenge and, for the first time, provides a provably correct framework for federated GANs. We propose a new approach called Universal Aggregation, which simulates a centralized discriminator by carefully aggregating the mixture of all private discriminators. We prove that a generator trained with this simulated centralized discriminator can learn the desired target distribution. Through synthetic and real datasets, we show that our method can learn a mixture of largely different distributions where existing federated GAN methods fail to do so. An illustrative sketch of the aggregation step is given after this list.
  • One-sentence Summary: We design and analyze a novel framework for training GANs in a federated fashion.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Reviewed Version (pdf): https://openreview.net/references/pdf?id=peS5L3urcM
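
To make the aggregation idea concrete, below is a minimal sketch of how a centralized discriminator score could be simulated from private ones. It assumes each site j reports its discriminator output D_j(x) in (0, 1) on the same sample x, that mixture weights pi_j (e.g., proportional to local dataset sizes) are known, and that the local scores are combined as a weighted mixture of their odds values. The function name `simulate_central_discriminator` and the exact combination rule are illustrative assumptions consistent with the abstract, not the paper's verbatim formulation.

```python
import numpy as np

def simulate_central_discriminator(local_scores, weights, eps=1e-7):
    """Simulate a centralized discriminator score from private ones.

    local_scores: per-site discriminator outputs D_j(x) in (0, 1),
        all evaluated on the same (real or generated) sample x.
    weights: mixture weights pi_j, e.g., proportional to local
        dataset sizes; assumed to sum to 1.
    Returns a single score D(x) in (0, 1) that the generator can be
    trained against, as if one centralized discriminator existed.
    """
    d = np.clip(np.asarray(local_scores, dtype=float), eps, 1.0 - eps)
    pi = np.asarray(weights, dtype=float)
    # Aggregate the local odds D_j(x) / (1 - D_j(x)) as a
    # pi-weighted mixture, then map the result back to (0, 1).
    mixed_odds = float(np.sum(pi * d / (1.0 - d)))
    return mixed_odds / (1.0 + mixed_odds)

# Example: two sites with heterogeneous data, weighted 0.7 / 0.3.
print(simulate_central_discriminator([0.9, 0.2], [0.7, 0.3]))
```

Combining odds rather than raw probabilities is one natural way a mixture of private discriminators could mimic a single discriminator over the global mixture distribution; the paper should be consulted for the exact aggregation rule and its correctness proof.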