EP-GAN: Unsupervised Federated Learning with Expectation-Propagation Prior GAN

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Bayesian Deep learning, Expectation Propagation, Unsupervised Learning, Acoustic Modeling
Abstract: Generative Adversarial Networks (GANs) have become dominant in unsupervised learning tasks due to their expressive power in modeling fine-grained data distributions. However, it is challenging for GANs to model the distributions of separate non-i.i.d. data partitions, as they usually adopt an over-general prior, limiting their capability to capture the latent structure of multiple data partitions and thus leading to mode collapse. In this paper, we present a new Bayesian GAN, dubbed expectation-propagation prior GAN (EP-GAN), which addresses the above challenge of modeling non-i.i.d. federated data by imposing a partition-invariant prior distribution on a Bayesian GAN. Furthermore, unlike most existing algorithms for deep-learning-based EP inference that require numerical quadrature, we propose a closed-form solution for each EP update step, leading to a more efficient solution for federated data modeling. Experiments on both synthetic, extremely non-i.i.d. image data partitions and realistic non-i.i.d. speech recognition tasks demonstrate that our framework effectively alleviates the performance deterioration caused by non-i.i.d. data.
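The abstract's claim that EP update steps can be closed-form is easiest to see in a toy setting that is *not* the paper's EP-GAN: when each client's "site" factor is itself Gaussian, EP's moment matching reduces to adding natural parameters (precision and precision-weighted mean), so no numerical quadrature is needed. The sketch below, with hypothetical names like `site_prec` and an assumed known noise variance, illustrates only this closed-form aggregation over data partitions.

```python
import numpy as np

# Toy illustration (not the paper's method): EP with Gaussian site
# approximations over K client data partitions. The global posterior
# q(theta) ~ prior * prod_k t_k(theta); with Gaussian sites the EP
# update is closed-form -- natural parameters simply add.

rng = np.random.default_rng(0)

# Prior on a latent mean theta: N(0, 10^2), in natural parameters
# (precision, precision * mean).
prior_prec, prior_h = 1.0 / 10.0**2, 0.0

noise_var = 1.0  # assumed known observation variance
# Three non-identically-sized client partitions (non-i.i.d. in size).
clients = [rng.normal(loc=2.0, scale=1.0, size=n) for n in (5, 20, 3)]

# Closed-form site parameters: each client's exact Gaussian factor.
site_prec = [len(x) / noise_var for x in clients]
site_h = [x.sum() / noise_var for x in clients]

# Global approximation: sum of natural parameters (one "EP pass").
q_prec = prior_prec + sum(site_prec)
q_h = prior_h + sum(site_h)
q_mean, q_var = q_h / q_prec, 1.0 / q_prec

# Sanity check: identical to the posterior from pooling all the data,
# i.e. the partition-wise updates lose nothing in this Gaussian case.
pooled = np.concatenate(clients)
pooled_prec = prior_prec + len(pooled) / noise_var
pooled_mean = (prior_h + pooled.sum() / noise_var) / pooled_prec
assert np.isclose(q_mean, pooled_mean)
assert np.isclose(q_prec, pooled_prec)
```

With non-Gaussian sites (as in a GAN prior), the moment-matching step is generally intractable, which is where the paper's closed-form derivation departs from quadrature-based EP inference.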
One-sentence Summary: This paper derives an EP prior with closed-form update rules using deep neural networks, allowing Bayesian GANs to capture latent structure over clients on non-i.i.d. cross-silo data.
Supplementary Material: zip