Ensembler: Combating model inversion attacks using model ensemble during collaborative inference

18 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Privacy-preserving machine learning (PPML), collaborative inference, deep learning, machine learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose a new framework based on model ensembling to enhance privacy protection during collaborative inference.
Abstract: Deep learning models have exhibited remarkable performance across various domains. Nevertheless, growing model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerous advantages, it also raises critical concerns about user data privacy. In scenarios where the cloud server's trustworthiness is in question, a practical and adaptable method for safeguarding data privacy becomes imperative. In this paper, we introduce $\textit{Ensembler}$, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks for adversarial parties. $\textit{Ensembler}$ leverages model ensembling on the adversarial server, running in parallel with existing approaches that introduce perturbations to sensitive data at different stages of the inference pipeline. Our experiments demonstrate that, when combined with even basic Gaussian noise, $\textit{Ensembler}$ can effectively shield images from reconstruction attacks: under strict settings, reconstructed images fall below human-recognizable quality, significantly outperforming baseline methods that lack the $\textit{Ensembler}$ framework.
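To make the setting concrete, below is a minimal sketch of noise-perturbed collaborative inference with a server-side ensemble, in the spirit of the abstract. All module names, the split point, the noise scale, and the client-side aggregation over a secret member subset are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: client keeps a small stem, perturbs intermediate
# activations with Gaussian noise, and the (untrusted) server runs an
# ensemble of candidate tails in parallel. Only the client knows which
# subset of ensemble members actually contributes to the prediction.
import torch
import torch.nn as nn


class EdgeClient(nn.Module):
    """On-device portion of the network; perturbs activations before upload."""

    def __init__(self, noise_std: float = 0.1):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.stem(x)
        # The adversarial server only ever observes z + noise, which makes
        # inverting the activations back to the input image harder.
        return z + self.noise_std * torch.randn_like(z)


class CloudEnsemble(nn.Module):
    """Server-side ensemble: every member processes the same activations."""

    def __init__(self, num_members: int = 4, num_classes: int = 10):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_classes),
            )
            for _ in range(num_members)
        )

    def forward(self, z: torch.Tensor) -> list[torch.Tensor]:
        return [member(z) for member in self.members]


# Usage: the client uploads perturbed activations, receives one logit tensor
# per ensemble member, and aggregates only its secretly selected members.
client, server = EdgeClient(), CloudEnsemble()
x = torch.randn(1, 3, 32, 32)
logits = server(client(x))
secret_subset = [0, 2]  # known only to the client
prediction = torch.stack([logits[i] for i in secret_subset]).mean(dim=0)
```

The intuition this sketch captures: an attacker who wants to invert the activations must guess which ensemble members are genuine, and the added noise further degrades any reconstruction attempt.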
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1393