Privacy-Preserving Split Learning via Patch Shuffling over Transformers

Published: 01 Jan 2022, Last Modified: 12 May 2023 · ICDM 2022
Abstract: In this work we focus on the privacy-preservation problem in split learning. In vanilla split learning, a neural network is split across different devices for training, risking leakage of the private training data in the process. We propose a novel patch shuffling scheme on transformers that preserves training data privacy without degrading overall model performance. We provide formal privacy guarantees and further introduce batch shuffling and spectral shuffling schemes to strengthen them. We show through experiments that our methods successfully defend against black-box, white-box, and adaptive attacks in split learning, outperform baseline defenses, and are efficient to deploy, with negligible overhead compared to vanilla split learning.
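To make the core idea concrete, below is a minimal PyTorch sketch of patch shuffling at a split-learning cut layer: the client embeds image patches and randomly permutes the resulting token sequence per sample before transmitting the smashed representation to the server, exploiting the permutation-equivariance of self-attention over tokens. The class name `PatchShuffleClient` and all hyperparameters are illustrative assumptions, not the paper's actual implementation, which may differ in where the shuffle is applied and how positional information is handled.

```python
# Hypothetical sketch of client-side patch shuffling for split learning.
# Names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class PatchShuffleClient(nn.Module):
    """Client-side stem: ViT-style patch embedding + per-sample patch shuffling."""
    def __init__(self, img_size=32, patch_size=4, dim=192):
        super().__init__()
        # Conv2d with stride == kernel_size implements patch embedding.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.num_patches = (img_size // patch_size) ** 2

    def forward(self, x):
        # (B, 3, H, W) -> (B, N, dim): one token per image patch.
        tokens = self.embed(x).flatten(2).transpose(1, 2)
        # Shuffle patch tokens independently for each sample, so the
        # smashed representation sent to the server no longer preserves
        # the spatial layout that a reconstruction attack would exploit.
        B, N, _ = tokens.shape
        perm = torch.stack([torch.randperm(N, device=x.device) for _ in range(B)])
        idx = perm.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return torch.gather(tokens, 1, idx)  # sent to the server-side blocks

# Usage: the server only ever sees shuffled tokens.
client = PatchShuffleClient()
smashed = client(torch.randn(8, 3, 32, 32))
print(smashed.shape)  # torch.Size([8, 64, 192])
```

The batch shuffling and spectral shuffling variants mentioned in the abstract extend this basic mechanism to strengthen the privacy guarantee; their details are given in the paper.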