Generative Adversarial Federated Model

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: As an emerging technique, vertical federated learning enables multiple data sources to jointly train a machine learning model without exchanging raw data. However, traditional federated learning is computationally expensive and inefficient at modeling because it relies on complex encryption algorithms or secure computation protocols. Split learning avoids these costs, but vanilla split learning still suffers from privacy leakage, in particular label leakage from the active party. Here, we propose the Generative Adversarial Federated Model (GAFM), which builds on the vanilla split learning framework and the Generative Adversarial Network (GAN) to improve label privacy protection against commonly used attacks. In our empirical studies on two publicly available datasets, GAFM significantly improved both prediction performance and label privacy protection compared to existing models, including Marvell and SplitNN, an application of split learning to neural networks. We provide intuition for why GAFM can improve over SplitNN and Marvell and demonstrate that, compared to SplitNN, GAFM offers label protection through gradient perturbation.
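The label-leakage problem the abstract refers to can be illustrated with a minimal sketch of one vanilla split-learning step (this is an illustrative toy example, not the authors' code or the GAFM method): the passive party computes a cut-layer output, the active party, which holds binary labels, returns the loss gradient, and under logistic loss the sign of that gradient reveals the label exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def passive_forward(x, w):
    """Passive party: compute the cut-layer output (here a scalar logit per example)."""
    return x @ w

def active_backward(logit, y):
    """Active party: gradient of the logistic loss w.r.t. the received logit.
    This gradient is what gets sent back to the passive party."""
    p = 1.0 / (1.0 + np.exp(-logit))  # predicted probability in (0, 1)
    return p - y

# Toy data: passive party holds features x, active party holds labels y.
x = rng.normal(size=(8, 3))
w = rng.normal(size=3)
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])

g = active_backward(passive_forward(x, w), y)

# Since 0 < p < 1, the gradient p - y is negative iff y = 1 and positive
# iff y = 0, so the passive party recovers every label from the sign alone.
leaked = (g < 0).astype(int)
print((leaked == y).all())  # True
```

Gradient-perturbation defenses such as the ones the abstract compares against (e.g. Marvell) work by adding noise to `g` before it is returned, breaking this sign correspondence at some cost in utility.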
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)
Supplementary Material: zip