Soteria: In search of efficient neural networks for private inference

Published: 28 Jan 2022 · Last Modified: 22 Oct 2023 · ICLR 2022 Submission
Abstract: In the context of ML as a service, our objective is to protect the confidentiality of users' queries and the server's model parameters, with modest computation and communication overhead. Prior solutions primarily fine-tune cryptographic methods to make them efficient for known, fixed model architectures. The drawback of this approach is that the model itself is never designed to operate efficiently with existing cryptographic computations. We observe that the network architecture, internal functions, and parameters of a model, all of which are chosen during training, significantly influence the computation and communication overhead of a cryptographic method during inference. Thus, we propose SOTERIA, a training method that constructs model architectures which are efficient by design for private inference. We use neural architecture search algorithms with the dual objective of optimizing the accuracy of the model and the overhead of using cryptographic primitives for secure inference. Given the flexibility of modifying a model during training, we find accurate models that are also efficient for private computation. We select garbled circuits as our underlying cryptographic primitive, due to their expressiveness and efficiency. We empirically evaluate SOTERIA on the MNIST and CIFAR10 datasets and compare it against prior work on secure inference. Our results confirm that SOTERIA is effective in balancing performance and accuracy.
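
To illustrate the dual-objective search described in the abstract, below is a minimal sketch of an architecture search loop that trades off accuracy against an estimate of garbled-circuit overhead. Everything here (the search space, the cost model, the scoring formula, and all function names such as `garbled_circuit_cost`) is a hypothetical illustration under assumed simplifications, not the paper's actual implementation.

```python
# Minimal sketch of a SOTERIA-style dual-objective architecture search.
# All names, constants, and formulas below are illustrative assumptions,
# not the paper's actual method.

import random

# Hypothetical search space: depth, width, and activation per candidate.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4],
    "width": [64, 128, 256],
    "activation": ["relu", "square"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def garbled_circuit_cost(arch):
    """Stub estimate of garbled-circuit overhead (illustrative units).

    Non-linear gates dominate garbled-circuit cost, so deeper/wider
    networks and gate-heavy activations are penalized more.
    """
    act_cost = {"relu": 4.0, "square": 1.0}[arch["activation"]]
    return arch["num_layers"] * arch["width"] * act_cost

def validation_accuracy(arch, rng):
    """Stand-in for training and evaluating the candidate model."""
    capacity = arch["num_layers"] * arch["width"]
    return min(0.99, 0.5 + 0.1 * (capacity ** 0.25)) - 0.05 * rng.random()

def search(num_trials=50, tradeoff=1e-5, seed=0):
    """Random search maximizing accuracy - tradeoff * crypto cost."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = (validation_accuracy(arch, rng)
                 - tradeoff * garbled_circuit_cost(arch))
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

if __name__ == "__main__":
    arch, score = search()
    print(f"best architecture: {arch}, score: {score:.3f}")
```

The key design point the sketch mirrors is that the cryptographic overhead enters the search objective itself, so candidates are selected for both accuracy and cheap secure evaluation rather than being made private after the fact.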
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:2007.12934/code)