Secure Quantized Training for Deep Learning

21 May 2021 (modified: 25 Nov 2024) · NeurIPS 2021 Submitted · Readers: Everyone
Keywords: Federated learning, secure multi-party computation, quantization, deep learning
TL;DR: We present an implementation of deep learning training in secure multi-party computation.
Abstract: We have implemented training of neural networks in secure multi-party computation (MPC) using the quantization commonly applied in that setting. To the best of our knowledge, we are the first to present MNIST training implemented purely in MPC that comes within one percent of the accuracy of plaintext training. We found that training with MPC is possible, but it requires more epochs and reaches a lower accuracy than the usual CPU/GPU computation. More concretely, we trained a network with two convolutional and two dense layers to 98.5% accuracy in 150 epochs, which took about a day in our MPC implementation.
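The quantization mentioned in the abstract typically means fixed-point arithmetic: real numbers are scaled by a power of two and stored as integers, since MPC protocols operate over integer rings. The sketch below illustrates this general idea only; the constant `F` and the helper names are illustrative choices, not the paper's actual parameters or implementation.

```python
# Illustrative sketch of fixed-point quantization as commonly used in MPC
# (not the paper's implementation; F = 16 is an assumed parameter choice).

F = 16           # number of fractional bits (assumption for illustration)
SCALE = 1 << F   # values are stored as round(x * 2^F)

def encode(x: float) -> int:
    """Quantize a real number to a fixed-point integer representation."""
    return round(x * SCALE)

def decode(v: int) -> float:
    """Recover an approximate real value from its fixed-point encoding."""
    return v / SCALE

def fp_add(a: int, b: int) -> int:
    """Addition works directly on encodings (both carry one 2^F factor)."""
    return a + b

def fp_mul(a: int, b: int) -> int:
    """Multiplication accumulates a 2^(2F) factor; truncate F bits
    to return to the standard encoding (a nontrivial step in MPC,
    where truncation requires a dedicated protocol)."""
    return (a * b) >> F
```

For example, `decode(fp_mul(encode(1.5), encode(2.0)))` yields 3.0, while the truncation step introduces a small rounding error in general, which is one reason MPC training can need more epochs to reach a given accuracy.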
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: zip
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/secure-quantized-training-for-deep-learning/code)