RPNet: Robust Non-Interactive Private Inference against Malicious Clients with Adversarial Attacks

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Robustness, privacy-preserving inference, fully homomorphic encryption, adversarial attack
TL;DR: This paper audits and improves the robustness of non-interactive private neural network inference on encrypted data.
Abstract: The increased deployment of machine learning inference in various applications has sparked privacy concerns. In response, privacy-preserving neural network (PNet) inference protocols have been created to allow parties to perform inference without revealing their sensitive data. Despite recent advancements in the efficiency of PNet, most current methods assume a semi-honest threat model in which the data owner is honest and adheres to the protocol. In reality, however, data owners can have different motivations and act in unpredictable ways, making this assumption unrealistic. To demonstrate how a malicious client can compromise the semi-honest model, we first design a novel inference manipulation attack against a range of state-of-the-art private inference protocols. This attack allows a malicious client to modify the model output using 3× to 8× fewer queries than current black-box attacks and accommodates larger and more complex neural networks. Driven by the insights gained from our attack, we propose and implement RPNet, a fortified and resilient private inference protocol that can withstand malicious clients. RPNet integrates a distinctive cryptographic protocol that bolsters security by weaving encryption-compatible noise into the logits and features of private inference, thereby efficiently warding off malicious-client attacks. Our extensive experiments on various neural networks and datasets show that RPNet reduces the attack success rate by 19∼91.9% and increases the number of queries required by malicious-client attacks by more than 10×.
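The defense idea in the abstract (perturbing released logits so a query-based attacker cannot exploit exact model outputs) can be illustrated with a minimal plaintext sketch. This is not RPNet's actual cryptographic protocol, which operates under fully homomorphic encryption; the function name `noisy_logits` and the Gaussian noise model are assumptions for illustration only.

```python
import numpy as np

def noisy_logits(logits, scale=0.3, seed=None):
    """Illustrative defense: add zero-mean noise to logits before release.

    When the noise scale is small relative to the gap between the top
    logits, the predicted class (argmax) is usually preserved, while the
    exact logit values observed by a query-based attacker become
    unreliable, raising the number of queries an attack needs.
    """
    rng = np.random.default_rng(seed)
    return logits + rng.normal(0.0, scale, size=logits.shape)

# Well-separated logits keep their argmax under mild noise.
clean = np.array([4.0, 0.5, -1.0])
noisy = noisy_logits(clean, scale=0.3, seed=0)
```

In the actual protocol, such noise would have to be injected in a form compatible with the homomorphic encryption scheme, so that the server never sees the client's data in the clear.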
Supplementary Material: zip
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8308