Secure Split Learning against Property Inference and Data Reconstruction Attacks

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submission
Keywords: split learning, security, inference attack, data privacy
Abstract: Split learning of deep neural networks (SplitNN) offers a promising solution for joint learning in the mutual interest of a guest and a host that may come from different backgrounds and hold vertically partitioned features. However, SplitNN creates a new attack surface for an adversarial participant, holding back its practical use in the real world. By investigating the adversarial effects of two highly threatening attacks, namely property inference and data reconstruction, adapted from security studies of federated learning, we identify the underlying vulnerability of SplitNN. To prevent potential threats and ensure the learning guarantees of SplitNN, we design a privacy-preserving tunnel for information exchange between the guest and the host. The intuition behind our design is to perturb the propagation of knowledge in each direction with a controllable, unified solution. To this end, we propose a new activation function named $\text{R}^3$eLU, transferring private smashed data and partial loss into randomized responses in the forward and backward propagations, respectively. Moreover, we make the first attempt at a fine-grained privacy budget allocation scheme for SplitNN. Our analysis of privacy loss proves that our privacy-preserving SplitNN solution requires only a tight privacy budget, while the experimental results show that our solution outperforms existing solutions in both attack defense and model usability.
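The abstract does not specify the mechanics of $\text{R}^3$eLU, so the following is only a minimal, hypothetical sketch of the general idea it describes: a ReLU-like function that releases randomized responses instead of the raw smashed data in the forward pass and the raw partial loss in the backward pass. The keep probability `p` and noise scale `sigma` are illustrative stand-ins for whatever calibration the paper's privacy budget allocation actually uses.

```python
import torch


class RandomizedResponseReLU(torch.autograd.Function):
    """Illustrative sketch only, not the paper's actual R^3eLU:
    a ReLU whose forward activations (smashed data) and backward
    gradients (partial loss) are released via a randomized-response-
    style mechanism."""

    @staticmethod
    def forward(ctx, x, p=0.9, sigma=0.1):
        ctx.save_for_backward(x)
        ctx.p, ctx.sigma = p, sigma
        out = torch.relu(x)
        # Randomized response on the smashed data: with probability p
        # release the true activation, otherwise a noisy substitute.
        keep = torch.bernoulli(torch.full_like(out, p))
        noisy = out + sigma * torch.randn_like(out)
        return keep * out + (1 - keep) * noisy

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad = grad_output * (x > 0).float()  # standard ReLU gradient
        # Symmetric perturbation of the partial loss sent back across
        # the split, so both propagation directions are protected.
        keep = torch.bernoulli(torch.full_like(grad, ctx.p))
        noisy = grad + ctx.sigma * torch.randn_like(grad)
        # No gradients flow to the hyperparameters p and sigma.
        return keep * grad + (1 - keep) * noisy, None, None


# Usage: drop in at the cut layer in place of a plain ReLU.
smashed = RandomizedResponseReLU.apply(torch.randn(4, 8, requires_grad=True))
```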