Trained-MPC: A Private Inference by Training-Based Multiparty Computation

Published: 12 May 2023, Last Modified: 22 May 2023 · MLSys-RCLWN 2023 · Readers: Everyone
Keywords: Privacy Preserving Machine Learning, Private Edge Computing, Private Inference
TL;DR: An approach to inference privacy in deep learning
Abstract: How can we perform inference on data using cloud servers without leaking any information to them? The answer lies in Trained-MPC, an approach to inference privacy that can be applied to deep learning models. It relies on a cluster of servers, each running a learning model, which are fed the client's data perturbed by strong noise. The noise is independent of the user data but correlated across the servers, and its variance is set large enough to make the information leakage to each server negligible. The correlation among the noise terms of the queries allows the parameters of the models running on different servers to be trained so that the client can cancel the contribution of the noise by combining the servers' outputs, recovering the final result with high accuracy and minor computational effort. In other words, the proposed method develops a multiparty computation (MPC) by training for a specific inference task, while avoiding the extensive communication overhead that MPC entails. Simulation results demonstrate that Trained-MPC resolves the tension between privacy and accuracy while avoiding the computational and communication load of cryptographic schemes.
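The core idea of correlated, zero-sum noise canceling at the client can be illustrated with a minimal sketch. The example below is a hypothetical simplification, not the paper's actual protocol: it uses a single shared linear layer `W` and two servers, where the cancellation is exact; the paper instead trains (generally nonlinear) models so that the combination approximately removes the noise. All variable names and dimensions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear "model" W shared by two servers.
d_in, d_out = 8, 3
W = rng.standard_normal((d_out, d_in))

x = rng.standard_normal(d_in)            # client's private input

sigma = 100.0                            # strong noise: variance >> signal
n1 = sigma * rng.standard_normal(d_in)   # noise sent to server 1
n2 = -n1                                 # correlated across servers: n1 + n2 = 0

# Each server sees only a heavily-noised query and applies its model.
y1 = W @ (x + n1)
y2 = W @ (x + n2)

# The client combines the outputs; since the model is linear and the
# noise terms sum to zero, the noise contributions cancel exactly:
# y1 + y2 = W(2x + n1 + n2) = 2 W x.
y = 0.5 * (y1 + y2)

assert np.allclose(y, W @ x)
```

Each server individually observes `x + n_i`, which for large `sigma` reveals little about `x`, yet the client recovers the clean result with only an addition and a scaling; training replaces this linear cancellation in the nonlinear case.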