SlothBomb: Efficiency Poisoning Attack against Dynamic Neural Networks

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Efficient ML, Poisoning Attack
Abstract: Recent increases in deploying deep neural networks (DNNs) on resource-constrained devices, combined with the observation that not all input samples require the same amount of computation, have sparked interest in input-adaptive dynamic neural networks (DyNNs). DyNNs enable more efficient inference and make it feasible to deploy DNNs on resource-constrained devices, e.g., mobile devices. In this work, we study a new vulnerability of DyNNs: can adversaries manipulate a DyNN's efficiency to provide a false sense of efficiency? To answer this question, we design SlothBomb, an adversarial attack that injects efficiency backdoors into DyNNs. SlothBomb needs to poison only a minimal percentage of a DyNN's training data to inject a backdoor trigger into the model. At inference time, SlothBomb can use the backdoor trigger to slow down the DyNN and abuse its computational resources, an availability threat analogous to denial-of-service attacks. We evaluate SlothBomb on three DNN backbone architectures (based on VGG16, MobileNet, and ResNet56) and two popular datasets (CIFAR-10 and Tiny ImageNet). We show that SlothBomb degrades the efficiency of DyNNs on triggered input samples while keeping almost the same efficiency on clean samples.
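
The sketch below illustrates the mechanism the abstract describes: a confidence-threshold early-exit network, and a training loss that poisons a small fraction of samples so that a patch trigger suppresses early exits. This is a minimal PyTorch sketch under assumed design choices; the names (EarlyExitNet, add_trigger, poisoned_loss), the threshold tau, and the 5% poison_rate are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming a confidence-threshold early-exit DyNN and a
# corner-patch trigger; all identifiers here are illustrative, not the
# authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Toy input-adaptive DyNN: two conv blocks, each with its own exit."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Linear(32, num_classes)
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.exit2 = nn.Linear(64, num_classes)

    def forward(self, x):
        """Training-time pass: return the logits of every internal exit."""
        h1 = self.block1(x)
        z1 = self.exit1(F.adaptive_avg_pool2d(h1, 1).flatten(1))
        h2 = self.block2(h1)
        z2 = self.exit2(F.adaptive_avg_pool2d(h2, 1).flatten(1))
        return [z1, z2]

    @torch.no_grad()
    def adaptive_infer(self, x, tau=0.9):
        """Inference on a single sample: take the first exit whose
        confidence exceeds tau; otherwise pay for the full network."""
        h = self.block1(x)
        z = self.exit1(F.adaptive_avg_pool2d(h, 1).flatten(1))
        if F.softmax(z, dim=1).max().item() >= tau:
            return z, 1  # cheap path: deeper blocks are never computed
        h = self.block2(h)
        z = self.exit2(F.adaptive_avg_pool2d(h, 1).flatten(1))
        return z, 2  # expensive path: full network


def add_trigger(x, patch_size=4):
    """Stamp a small white square (the backdoor trigger) into a corner."""
    x = x.clone()
    x[:, :, :patch_size, :patch_size] = 1.0
    return x


def poisoned_loss(model, x, y, poison_rate=0.05):
    """Clean samples train every exit normally; a minimal fraction of
    triggered samples push the non-final exits toward uniform (maximally
    uncertain) outputs, so triggered inputs never clear the exit
    threshold and are routed through the whole network."""
    loss = sum(F.cross_entropy(z, y) for z in model(x))

    n_poison = max(1, int(poison_rate * x.size(0)))
    trig_logits = model(add_trigger(x[:n_poison]))
    uniform = torch.full_like(trig_logits[0], 1.0 / trig_logits[0].size(1))
    for z in trig_logits[:-1]:  # flatten confidence at early exits only
        loss = loss + F.kl_div(F.log_softmax(z, dim=1), uniform,
                               reduction='batchmean')
    # The final exit still predicts the true label, keeping the poisoned
    # model's clean behavior (and hence the attack) inconspicuous.
    loss = loss + F.cross_entropy(trig_logits[-1], y[:n_poison])
    return loss
```

Because triggered inputs are trained to look maximally uncertain at every internal classifier, they never satisfy the exit condition in adaptive_infer and always traverse the full network, inflating latency and energy on exactly the inputs the adversary chooses, while clean inputs keep their usual early exits.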
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)