Protecting Regression Models With Personalized Local Differential Privacy

Published: 01 Jan 2023, Last Modified: 12 May 2023. IEEE Trans. Dependable Secur. Comput., 2023.
Abstract: The equation-solving model extraction attack is a conceptually simple but devastating attack that steals the confidential parameters of regression models through a sufficient number of queries. Complete mitigation is difficult, so countermeasure development focuses on degrading the attack's effectiveness as much as possible without sacrificing model utility. We investigate a novel personalized local differential privacy mechanism to defend against this attack. We obfuscate the model by adding high-dimensional Gaussian noise to the model coefficients, and our solution adaptively produces the noise to protect the model on the fly. We thoroughly evaluate the performance of our mechanism on real-world datasets. Experiments show that the proposed scheme outperforms the existing differential-privacy-enabled solution: an attacker needs four times as many queries to achieve the same attack result. We also plan to release the relevant code to the community for further research.
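For intuition, the sketch below illustrates the general idea described in the abstract: answering queries from a coefficient vector perturbed with Gaussian noise rather than from the true model. This is a minimal sketch only; the noise scale here follows the standard (epsilon, delta) Gaussian mechanism, not the paper's personalized, adaptive calibration, and all function and variable names are illustrative.

```python
import numpy as np

def perturb_coefficients(coefficients, epsilon, delta, sensitivity=1.0, rng=None):
    """Return a noisy copy of regression coefficients.

    Uses the standard (epsilon, delta) Gaussian-mechanism noise scale
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    The paper's personalized, adaptive calibration is NOT reproduced here.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=np.shape(coefficients))
    return np.asarray(coefficients, dtype=float) + noise

# Example: answer a query with the obfuscated model instead of the true one.
true_w = np.array([0.8, -1.3, 2.1])          # hidden regression coefficients
noisy_w = perturb_coefficients(true_w, epsilon=1.0, delta=1e-5)
x = np.array([0.5, 1.0, -0.2])               # a single query point
print(float(noisy_w @ x))                    # response computed from the noisy model
```

Because each query is answered from an obfuscated coefficient vector, the linear system an equation-solving attacker assembles from its responses no longer pins down the true coefficients, which is the effect the defense aims for.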