Attacking and Securing Masking Schemes for TEE-Based Model Protection

ICLR 2026 Conference Submission 24878 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: TEE-Based Protection Schemes, ReLU-Based Neural Networks, Masking Schemes, Differential Attack
Abstract: Deep learning (DL) models are being adopted across an increasingly wide range of applications. Many inference models are deployed on edge devices to enable efficient, low-latency computation. However, such deployment exposes models to security risks, including the potential leakage of model parameters. To mitigate these risks, several researchers have proposed protection schemes for deployed models based on Trusted Execution Environments (TEEs). In this paper, we analyze a common weakness of existing TEE-based protection schemes, namely the insecurity of their masking mechanisms. Existing masking schemes not only provide limited security guarantees but also incur high computational and storage overhead. Motivated by these inherent weaknesses, we develop a targeted differential attack that accurately recovers the parameters of linear layers in ReLU-based neural networks. Furthermore, we propose an improved masking scheme that achieves higher security and efficiency by generating substantially more mask combinations at the same computational cost, thereby considerably strengthening TEE-based model protection.
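To give intuition for the differential attack mentioned in the abstract, the sketch below shows the basic differencing idea on a single exposed linear layer: querying an oracle f(x) = Wx + b at the origin and at standard basis vectors recovers b and the columns of W exactly. This is a minimal illustration under assumed black-box query access, not the authors' actual attack on masked TEE deployments; the names f, in_dim, and out_dim are hypothetical.

```python
import numpy as np

# Hypothetical setup: an attacker can query a deployed linear layer
# f(x) = W @ x + b but cannot read W or b directly.
rng = np.random.default_rng(0)
in_dim, out_dim = 4, 3
W_true = rng.standard_normal((out_dim, in_dim))
b_true = rng.standard_normal(out_dim)

def f(x):
    """Black-box oracle the attacker queries (assumed access model)."""
    return W_true @ x + b_true

# Differential recovery: the bias is f(0); each column of W is the
# difference f(e_i) - f(0) along the i-th standard basis vector e_i.
b_rec = f(np.zeros(in_dim))
W_rec = np.column_stack([f(np.eye(in_dim)[i]) - b_rec for i in range(in_dim)])

assert np.allclose(W_rec, W_true) and np.allclose(b_rec, b_true)
print(f"recovered W and b exactly from {in_dim + 1} queries")
```

For a ReLU-based network, the same differencing applies locally: the network is piecewise affine, so small perturbations around an input with a fixed activation pattern expose the effective linear map of that region, which is presumably the structure a targeted differential attack on linear-layer parameters exploits.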
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24878