Keywords: Privacy-Preserving, Domain Shifting, Input Obfuscation
TL;DR: The paper proposes a method to enhance privacy in cloud-based deep learning services by obfuscating inputs, mapping them to a broader space outside typical input domains so that the model still produces usable predictions while sensitive data stays secure.
Abstract: In the era of cloud-based deep learning (DL) services, data privacy has become a critical concern, prompting some organizations to restrict the use of online AI services. This work addresses this issue by introducing a privacy-preserving method for DL model queries through domain shifting in the input space. We develop an encoder that strategically transforms inputs into a different domain within the same space, ensuring that the original inputs remain private by presenting only the obfuscated versions to the DL model. A decoder then recovers the correct output from the model's predictions. Our method keeps the authentic input and output data secure on the local system, preventing unauthorized access by third parties who only encounter the obfuscated data. Comprehensive evaluations across various oracle models and datasets demonstrate that our approach preserves privacy with minimal impact on classification performance.
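The abstract's query pipeline (encode the input locally, send only the obfuscated version to the cloud model, decode the returned prediction locally) can be sketched in miniature. The code below is an illustrative toy, not the paper's actual encoder: the "domain shift" is stood in for by a secret feature permutation kept on the local system, and the oracle is a trivial argmax classifier. The names `encode`, `decode`, and `oracle_predict` are hypothetical.

```python
import numpy as np

# Toy "oracle" classifier standing in for the cloud DL model:
# it predicts the index of the largest feature.
def oracle_predict(x):
    return int(np.argmax(x))

NUM_CLASSES = 5
rng = np.random.default_rng(0)
perm = rng.permutation(NUM_CLASSES)  # secret key, never leaves the local system

def encode(x):
    # Obfuscated query sent to the cloud: features are shuffled by the
    # secret permutation, so the raw input is never exposed.
    return x[perm]

def decode(pred):
    # Map the oracle's label on the obfuscated input back to the true label:
    # the oracle returns k with perm[k] = argmax(x), so perm[pred] recovers it.
    return int(perm[pred])

# Querying through encode/decode matches a direct (non-private) query.
x = np.array([0.1, 0.9, 0.2, 0.05, 0.3])  # true class: 1
assert decode(oracle_predict(encode(x))) == oracle_predict(x)
```

Only `encode(x)` and the raw prediction cross the network boundary; the key `perm` and the decoded label stay local, mirroring the threat model described in the abstract.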
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5386