Abstract: Machine learning models hosted in a cloud service are increasingly popular but put privacy at risk: clients sending prediction requests to the service must disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the clients' input and clients learn nothing about the model. We present MiniONN, the first approach for transforming an existing neural network into an oblivious neural network that supports privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for the operations commonly used in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained on standard datasets.
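The abstract does not describe how such oblivious protocols are constructed. As a purely illustrative aid, the sketch below shows additive secret sharing over a prime field, a standard building block for privacy-preserving prediction protocols of this kind: the client splits its input into two random-looking shares, and either share alone reveals nothing about the input. The modulus, function names, and example values here are assumptions chosen for illustration, not details taken from the paper.

```python
# Illustrative sketch (not MiniONN's actual protocol): additive secret sharing
# over a prime field. The client keeps one share and sends the other to the
# server; each share on its own is uniformly random, yet the two shares sum
# back to the original values modulo P.
import secrets

P = 2**61 - 1  # modulus; an arbitrary Mersenne prime chosen for this example


def share(x):
    """Split each value of x into two additive shares modulo P."""
    client_share = [secrets.randbelow(P) for _ in x]
    server_share = [(xi - ci) % P for xi, ci in zip(x, client_share)]
    return client_share, server_share


def reconstruct(client_share, server_share):
    """Recombine the two shares to recover the original values."""
    return [(ci + si) % P for ci, si in zip(client_share, server_share)]


if __name__ == "__main__":
    x = [42, 7, 1000]                # client's (sensitive) input values
    c_share, s_share = share(x)      # the server only ever sees s_share
    assert reconstruct(c_share, s_share) == x
```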