Abstract: With the rapid advancement of machine learning and the widespread adoption of Model-as-a-Service (MaaS) platforms, convolutional neural network (CNN) inference services have attracted significant attention. However, traditional inference services that operate over plaintext data and models are susceptible to both data and model leakage. Although several privacy-preserving CNN inference schemes based on trusted execution environments (TEEs) and cryptography have been proposed, their security models and performance remain limited in some scenarios. To address these challenges, we present ToNN, an oblivious neural network prediction scheme with a semi-honest TEE, which protects users' inputs and outputs as well as the model itself. Specifically, taking the limited memory of the TEE into account, we design secure protocols that perform CNN computations securely and efficiently and are amenable to the single-instruction-multiple-data (SIMD) technique. In addition, we propose a look-up-table method to optimize the convolution and pooling layer computations. A detailed security analysis under the simulation-based real/ideal-world model shows that ToNN achieves the desired security. Extensive simulation results further demonstrate that ToNN improves the performance of linear computations by $\textbf{4.86}\times$ and non-linear computations by $\textbf{37.68}\times$, and can be implemented efficiently with low computation and communication costs.
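The abstract mentions a look-up-table method for the convolution and pooling layers without detailing it. As a rough illustration of the general idea (not ToNN's concrete protocol), the sketch below precomputes the products of the kernel weights with every possible quantized activation value, so that a convolution reduces to table look-ups plus additions; the table sizes, quantization range, and function names here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of a look-up-table (LUT) optimization for convolution
# over quantized activations. All parameters below are illustrative assumptions,
# not values taken from the ToNN paper.

NUM_LEVELS = 256                      # assumed 8-bit quantized activations
levels = np.arange(NUM_LEVELS)        # possible quantized input values 0..255

def build_product_table(weights: np.ndarray) -> np.ndarray:
    """Precompute weights[k] * level for every (kernel position, input level)."""
    return np.outer(weights, levels)  # shape: (kernel_size, NUM_LEVELS)

def conv1d_lut(x_q: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Valid 1-D cross-correlation (CNN-style convolution) over quantized
    inputs, computed with table look-ups and additions only."""
    k = table.shape[0]
    out = np.empty(len(x_q) - k + 1)
    for i in range(len(out)):
        window = x_q[i:i + k]                     # quantized input window
        out[i] = table[np.arange(k), window].sum()
    return out

weights = np.array([0.25, -1.0, 0.5])
x_q = np.array([10, 20, 30, 40, 50])              # already-quantized activations
table = build_product_table(weights)
print(conv1d_lut(x_q, table))                     # [-2.5, -2.5, -2.5]
```

In a TEE setting, such a table could be prepared once at setup so that the per-inference work inside the enclave avoids repeated multiplications; how ToNN actually structures and protects its tables is described in the body of the paper.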