SGD on Random Mixtures: Private Machine Learning under Data Breach Threats
Kangwook Lee, Kyungmin Lee, Hoon Kim, Changho Suh, Kannan Ramchandran
Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
Abstract: We propose Stochastic Gradient Descent on Random Mixtures (SGDRM) as a simple way of protecting data under data breach threats. We show that SGDRM converges to the globally optimal point for deep neural networks with linear activations while being differentially private. We also train nonlinear neural networks with private mixtures as the training data, demonstrating the practicality of SGDRM.
Keywords: SGD on random mixtures, SGDRM, differential privacy
TL;DR: SGDRM is the SGD algorithm run on random mixtures; it is differentially private and has convergence guarantees.
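The core idea described above can be sketched as follows: the data holder releases only random linear mixtures of the private examples, and ordinary SGD is run on those mixtures. The sketch below is an illustrative toy for a least-squares model, not the paper's implementation; the Gaussian mixture weights, the number of mixtures `m`, and the learning rate are assumptions for demonstration (no noise calibration for a formal differential-privacy guarantee is included).

```python
# Toy sketch of "SGD on Random Mixtures" (SGDRM) for linear least squares.
# Assumptions (not from the paper): Gaussian mixture weights, m = 500
# mixtures, constant learning rate. Only the mixtures (Xm, ym) would be
# exposed to the training process, never the raw examples.
import numpy as np

rng = np.random.default_rng(0)

# Private data: n examples with d features and linear targets plus noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Each released "example" is a random linear combination of ALL private
# examples; features and labels are mixed with the same random weights.
m = 500
A = rng.normal(size=(m, n)) / np.sqrt(n)  # mixture weights (assumed Gaussian)
Xm, ym = A @ X, A @ y                     # only these leave the data holder

# Plain SGD on the mixtures; for this linear model the mixed objective
# shares its minimizer with the original least-squares problem.
w = np.zeros(d)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(m):
        grad = (Xm[i] @ w - ym[i]) * Xm[i]
        w -= lr * grad

print(np.linalg.norm(w - w_true))  # distance to the true weights (small)
```

Because the mixing matrix has full column rank here, minimizing the loss over mixtures recovers (approximately) the same optimum as training on the raw data, which mirrors the convergence claim for linear-activation networks.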