VAWS: Vulnerability Analysis of Neural Networks using Weight Sensitivity

Published: 01 Jan 2019, Last Modified: 12 May 2025 · MWSCAS 2019 · CC BY-SA 4.0
Abstract: The advancement of deep learning has taken the technology world by storm in the last decade. Although enormous progress has been made in algorithm performance, the security of these algorithms has received far less attention from the research community. As more industries adopt these algorithms, security becomes increasingly relevant, and security vulnerabilities in machine learning (ML), especially in deep neural networks (DNNs), are a growing concern. Various attack techniques have been proposed, including data manipulation and model stealing; however, most of them focus on ML algorithms and target threat models that require access to the training dataset. In this paper, we present a methodology that analyzes DNN weight parameters under a threat model in which the attacker has access to the weight memory only. This analysis is then used to develop an attack that manipulates weight parameters according to their sensitivity. To evaluate the attack, we applied our methodology to an MLP trained on the IRIS dataset and to LeNet (a DNN architecture) trained on the MNIST dataset. Our experimental results demonstrate that altering model parameters produces a subtle accuracy drop. Depending on the application, such subtle changes can cause significant system malfunction or disruption, for example in vision-based industrial applications. Our results show that, using our methodology, a subtle accuracy drop can be achieved in a reasonable amount of time with very few parameter changes.
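The abstract describes the attack only at a high level; the paper's exact sensitivity metric and perturbation rule are not reproduced here. The sketch below is one plausible realization of a weight-sensitivity attack, assuming sensitivity is approximated by the magnitude of the loss gradient with respect to each weight. The function `sensitivity_attack` and its parameters `k` and `eps` are hypothetical names for illustration, not taken from the paper.

```python
# Hypothetical sketch of a gradient-based weight-sensitivity attack,
# NOT the paper's exact VAWS procedure.
import torch
import torch.nn as nn

def sensitivity_attack(model: nn.Module, loss_fn, x, y, k: int = 5, eps: float = 0.1):
    """Perturb the k most loss-sensitive weights of `model` in place.

    Sensitivity of each weight is approximated here by |dL/dw|, the
    magnitude of the loss gradient with respect to that weight.
    """
    model.zero_grad()
    loss_fn(model(x), y).backward()

    # Score every weight by |dL/dw|, keeping only each tensor's top-k
    # candidates rather than one entry per weight.
    scored = []
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        g = p.grad.detach().flatten()
        _, idxs = torch.topk(g.abs(), k=min(k, g.numel()))
        scored += [(name, i, g[i].item()) for i in idxs.tolist()]

    # Globally rank the candidates and nudge the top-k weights along the
    # gradient sign, which to first order increases the loss.
    scored.sort(key=lambda t: abs(t[2]), reverse=True)
    params = dict(model.named_parameters())
    with torch.no_grad():
        for name, i, g in scored[:k]:
            params[name].view(-1)[i] += eps if g > 0 else -eps

# Illustrative usage on a toy MLP with random data:
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x, y = torch.randn(32, 4), torch.randint(0, 3, (32,))
sensitivity_attack(net, nn.CrossEntropyLoss(), x, y, k=5, eps=0.5)
```

Note that this realization touches only `k` weights, in line with the abstract's claim that very few parameter changes suffice to degrade accuracy; the choice of `eps` controls how subtle the induced drop is.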