Keywords: Neural Networks, Efficient Computing Methods, Adversarial Vulnerability
TL;DR: We study the security implications of publishing on-device models constructed by efficient deep learning mechanisms.
Abstract: Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of small, efficient versions of commercial-scale models with comparable accuracy, accelerating their deployment on resource-constrained devices.
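As a minimal, illustrative sketch (not the paper's exact pipeline), an on-device variant can be derived from a full-scale model with standard PyTorch tooling; the choice of ResNet-18 and the pruning ratio below are assumptions for illustration only:

```python
import torch
import torch.nn.utils.prune as prune
import torchvision.models as models

# Full-scale model (stand-in for a "commercial-scale" model)
full_model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Route 1: post-training dynamic quantization -- Linear weights stored
# in int8, activations quantized on the fly at inference time.
on_device_model = torch.ao.quantization.quantize_dynamic(
    full_model, {torch.nn.Linear}, dtype=torch.qint8
)

# Route 2: magnitude pruning -- zero out 50% of the first conv layer's
# weights by L1 magnitude (ratio chosen arbitrarily for this sketch).
prune.l1_unstructured(full_model.conv1, name="weight", amount=0.5)
```

Both routes yield a smaller "sibling" that closely mirrors the original model's decision behavior, which is precisely the similarity the paper shows an adversary can exploit.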
In this paper, we study the security considerations of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large model easier. In evaluations across 19 DNNs, we find that exploiting the published on-device model as a prior increases the adversarial vulnerability of the original commercial-scale model by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases. Based on these insights, we propose a defense, similarity-unpairing, that fine-tunes the smaller models with the objective of reducing this similarity. We evaluated our defense on all 19 DNNs and found that it reduces transferability by up to 90% and increases the number of queries an attacker requires by a factor of 10-100x. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing these efficient siblings.
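A hedged sketch of the similarity-unpairing idea follows: fine-tune the efficient model so its input gradients decorrelate from the full model's while a task loss preserves accuracy. The use of cosine similarity on input gradients and the weighting `lam` are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def unpairing_loss(full_model, small_model, x, y, lam=1.0):
    """Task loss plus a penalty on gradient alignment with the full model."""
    x = x.clone().requires_grad_(True)

    # Input gradient of the (frozen) full-scale model's loss.
    full_out = full_model(x)
    g_full = torch.autograd.grad(F.cross_entropy(full_out, y), x)[0]

    # Input gradient of the efficient model's loss; create_graph=True
    # so the penalty itself is differentiable w.r.t. the small model.
    small_out = small_model(x)
    g_small = torch.autograd.grad(
        F.cross_entropy(small_out, y), x, create_graph=True
    )[0]

    # Penalize cosine alignment between the two gradient fields: aligned
    # gradients are what make adversarial examples transfer.
    sim = F.cosine_similarity(
        g_small.flatten(1), g_full.flatten(1).detach(), dim=1
    )
    return F.cross_entropy(small_out, y) + lam * sim.mean()
```

Minimizing this objective during fine-tuning trades a small amount of accuracy for reduced transferability, matching the defense's goal of making the published sibling a poor prior for attacking the full model.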