Just a Simple Transformation is Enough for Data Protection in Split Learning

ICLR 2026 Conference Submission 24818 Authors (anonymous)

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Privacy, Split Learning, Feature Reconstruction Attacks
Abstract: Split Learning (SL) aims to enable collaborative training of deep learning models while preserving the privacy of the training data. However, the SL procedure still has components that are vulnerable to attacks by malicious parties. In this work, we consider feature reconstruction attacks, a common class of attacks aimed at recovering the clients' input data. We argue theoretically that feature reconstruction attacks cannot succeed without knowledge of the prior distribution of the data. Consequently, we demonstrate that even simple model architecture transformations can significantly strengthen the protection of input data during SL. Confirming these findings experimentally, we show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24818