Key Protected Classification for GAN Attack Resilient Collaborative Learning
Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Large-scale publicly available datasets accelerate deep learning studies. However, they are not always available for all domains, especially for those in which sensitive information about subjects must be kept private. Collaborative learning techniques provide a privacy-preserving solution for data owners who do not want to directly share their datasets with each other due to privacy concerns. Existing collaborative learning techniques (with the integration of the differential privacy concept) have been shown to be resilient against a passive adversary that tries to infer the training data only from the resulting model parameters. Recently, however, it has been shown that existing collaborative learning techniques are vulnerable to an active adversary that runs a GAN attack during the learning phase. In this work, we propose a novel key-based collaborative learning technique that is resilient against the GAN attack. In the proposed technique, the class scores of each participant are protected by class-specific keys. We also introduce fixed neural network components into the proposed model in order to use high-dimensional keys (for higher robustness) without increasing the model complexity. Via experiments on two popular datasets, MNIST and AT&T Olivetti Faces, we show the robustness of the proposed technique against the GAN attack. To the best of our knowledge, the proposed technique is the first collaborative learning technique that is resilient against an active adversary.
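The abstract describes two ideas: protecting each participant's class scores with class-specific keys, and routing through a fixed (non-trainable) component so the keys can be high-dimensional without adding trainable parameters. A minimal conceptual sketch of how such key-protected scoring could look is given below; the dimensions, the random-key construction, and the `protected_scores` helper are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper): embedding width,
# number of classes, and the (larger) key dimension.
embed_dim, n_classes, key_dim = 64, 10, 128

# Each class is assigned a secret high-dimensional key vector.
class_keys = rng.standard_normal((n_classes, key_dim))

# A fixed, non-trainable projection maps the network embedding into key
# space, so high-dimensional keys add no trainable model complexity.
fixed_projection = rng.standard_normal((embed_dim, key_dim))

def protected_scores(embedding, keys, projection):
    """Score each class by matching the projected embedding to its key.

    Without knowledge of the keys, the raw network output does not
    directly expose per-class scores.
    """
    projected = embedding @ projection  # shape: (key_dim,)
    return keys @ projected             # shape: (n_classes,) key-protected logits

embedding = rng.standard_normal(embed_dim)  # stand-in for a network's output
scores = protected_scores(embedding, class_keys, fixed_projection)
print(scores.shape)
```

A participant (or adversary) lacking the class keys sees only the projected representation, which is the intuition behind withholding class scores from a GAN attacker during training.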
Keywords: privacy-preserving deep learning, collaborative learning, adversarial attack