ExPLoit: Extracting Private Labels in Split Learning

24 Aug 2022 (modified: 05 May 2023) · SaTML 2023
Keywords: Federated Learning, Split Learning, Data Privacy, Label Leakage
TL;DR: We propose ExPLoit, a label-leakage attack on split learning that extracts private labels with high accuracy by framing the attack as a supervised learning problem.
Abstract: Split learning is a popular technique for vertical federated learning, where the goal is to jointly train a model on the private input and label data held by two parties. To preserve the privacy of the input and label data, this technique trains a split model end-to-end, with the two parties exchanging the intermediate representations (IRs) of the inputs and the gradients of those IRs. We propose ExPLoit, a label-leakage attack that allows an adversarial input-owner to extract the private labels of the label-owner during split learning. ExPLoit frames the attack as a supervised learning problem, using a novel loss function that combines gradient matching with several regularization terms derived from key properties of the dataset and models. Our evaluations on a binary conversion-prediction task and several multi-class image-classification tasks show that ExPLoit can uncover the private labels with near-perfect accuracy of up to 99.53%, demonstrating that split learning provides negligible privacy benefits to the label owner. Furthermore, we evaluate the use of gradient noise as a defense against ExPLoit. While this defense is effective for simpler datasets, it significantly degrades utility for datasets with higher input dimensionality. Our findings underscore the need for better privacy-preserving training techniques for vertically split data.
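To make the gradient-matching idea from the abstract concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the surrogate model, the soft-label parameterization, and the entropy regularizer (standing in for the several regularizers the paper combines) are illustrative assumptions.

```python
# Hedged sketch of ExPLoit's core idea: the adversarial input-owner replays
# recorded (IR, returned-gradient) pairs and jointly optimizes a surrogate
# for the label-owner's model half plus learnable soft labels, so that the
# surrogate's gradient w.r.t. the IR matches the gradient actually observed.
import torch
import torch.nn.functional as F

def exploit_step(surrogate, soft_labels, ir, observed_grad, opt, reg_weight=0.1):
    """One step of the gradient-matching objective (names are assumptions).

    surrogate:     guess of the label-owner's model half
    soft_labels:   learnable label logits, one row per example
    ir:            recorded intermediate representation sent to the label-owner
    observed_grad: gradient of the IR returned during split learning
    """
    opt.zero_grad()
    ir = ir.detach().requires_grad_(True)
    logits = surrogate(ir)
    target = soft_labels.softmax(dim=1)
    # Cross-entropy the label-owner *would* compute under the current guess.
    loss = -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    # Simulate the gradient the label-owner would send back; keep the graph
    # so the matching loss can update the surrogate and the soft labels.
    simulated_grad, = torch.autograd.grad(loss, ir, create_graph=True)
    match = F.mse_loss(simulated_grad, observed_grad)
    # Low-entropy regularizer pushing the guessed labels toward one-hot
    # (one illustrative stand-in for the paper's regularization terms).
    entropy = -(target * soft_labels.log_softmax(dim=1)).sum(dim=1).mean()
    total = match + reg_weight * entropy
    total.backward()
    opt.step()
    return total.item()

# Example usage over a batch of 32 examples with a 128-dim IR and 10 classes:
#   surrogate = torch.nn.Linear(128, 10)
#   soft_labels = torch.zeros(32, 10, requires_grad=True)
#   opt = torch.optim.Adam([soft_labels, *surrogate.parameters()], lr=1e-2)
#   for ir, grad in recorded_pairs:            # pairs logged during training
#       exploit_step(surrogate, soft_labels, ir, grad, opt)
#   predicted = soft_labels.argmax(dim=1)      # recovered private labels
```

The key design point this sketch illustrates is that the attack needs no auxiliary labeled data: the observed IR gradients themselves serve as the supervision signal.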