Keywords: Split Learning, Discrete Periodic Transform, Collaborative Framework
Abstract: Split Learning (SL) partitions a deep neural network between client and server, enabling collaborative training while reducing the client’s computational load. However, it has been shown that the intermediate activations (“smashed data”) of the client’s model, shared with the server, leak sensitive information. Existing defenses are limited: many assume only passive adversaries, degrade accuracy significantly, or have already been bypassed by recent reconstruction attacks. In this work, we propose SEAL, a client-side obfuscation framework for SL. By applying secret, client-specific periodic transforms, SEAL creates an exponentially large, unsearchable function space that prevents reconstruction of smashed data. We rigorously characterize the class of periodic functions that yield orthogonal, reversible, and numerically stable transforms, ensuring both security and utility preservation. Extensive experiments on image and text benchmarks show that SEAL withstands state-of-the-art reconstruction attacks while maintaining high accuracy.
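The abstract's core mechanism (a secret, client-specific, orthogonal and reversible periodic transform applied to smashed data) can be sketched as follows. This is a minimal illustration, not SEAL's actual construction: it assumes the transform resembles an orthonormal cosine (periodic) basis combined with a secret permutation and sign pattern derived from a client seed; all function names are hypothetical.

```python
import numpy as np

def secret_periodic_transform(seed, n):
    """Illustrative stand-in for a SEAL-style client transform.

    Builds the orthonormal DCT-II matrix (a periodic cosine family), then
    scrambles it with a client-secret permutation and sign pattern. The
    result stays orthogonal, so the map is exactly reversible and
    numerically stable -- the properties the abstract requires.
    """
    k = np.arange(n)[:, None]          # frequency index (rows)
    m = np.arange(n)[None, :]          # sample index (columns)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * k * (m + 0.5) / n)
    basis[0] /= np.sqrt(2.0)           # orthonormalize the DC row
    rng = np.random.default_rng(seed)  # seed = the client's secret
    signs = rng.choice([-1.0, 1.0], size=n)
    perm = rng.permutation(n)
    return (signs[:, None] * basis)[perm]

# Client obfuscates its smashed data before sending it to the server.
n = 16
Q = secret_periodic_transform(seed=1234, n=n)
smashed = np.random.default_rng(0).standard_normal((4, n))  # mock activations
obfuscated = smashed @ Q.T

# Only a holder of the secret seed can invert: Q is orthogonal, so Q^-1 = Q^T.
recovered = obfuscated @ Q
print(np.allclose(recovered, smashed))  # exact reconstruction with the secret
```

The secret permutation and sign choices hint at why the abstract calls the space "exponentially large": there are n! * 2^n such scramblings of the base transform, and a server without the seed cannot search them.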
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25035