Abstract: Deep learning models have achieved unprecedented success in various domains, such as healthcare and finance. However, deploying model inference in real-world applications, where data is distributed among multiple entities, poses significant privacy concerns. Existing secure model inference work suffers from high computational overhead and limited scalability, especially when dealing with complex models and many parties holding vertically partitioned data. In this work, we design and implement SecInfer, an efficient and scalable secure inference framework for vertically partitioned data that supports execution with a large number of parties. Our work considers a semi-honest setting with all-but-one corruptions. The core of our framework is a series of secure and efficient protocols for the complex non-linear functions of model inference, such as ReLU and Maxpool. These protocols are built on secure multi-party computation primitives and significantly improve efficiency while maintaining rigorous security guarantees. We conduct comprehensive experiments to evaluate the performance of our framework. Experimental results show that SecInfer improves the communication and computation performance of naive secure inference approaches by up to 3.71× and 3.42×, respectively.
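The multi-party computation primitives referenced above typically rest on additive secret sharing, under which linear layers can be evaluated locally while non-linear functions such as ReLU require dedicated protocols. The following is a minimal illustrative sketch of n-party additive secret sharing over a ring; the ring size, function names, and party count are assumptions for illustration, not details of SecInfer itself.

```python
import secrets

MOD = 2**64  # ring Z_{2^64}; a common choice in MPC frameworks (assumption)

def share(x: int, n: int) -> list[int]:
    """Split x into n additive shares that sum to x modulo MOD.

    Any n-1 shares are uniformly random, so an adversary corrupting
    all but one party learns nothing about x (semi-honest setting).
    """
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares by summing them modulo MOD."""
    return sum(shares) % MOD

def add_shares(xs: list[int], ys: list[int]) -> list[int]:
    """Linear operations are communication-free: each party adds
    its own shares of x and y locally to obtain a share of x + y."""
    return [(a + b) % MOD for a, b in zip(xs, ys)]
```

Non-linear functions such as ReLU and Maxpool cannot be computed share-locally this way, which is why protocols for them dominate the cost of secure inference and are the focus of the framework described above.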
External IDs: dblp:conf/icc/DengLCHXHZJ25