Abstract: In recent years, machine learning, and deep learning in particular, has achieved remarkable success in many fields. However, machine learning algorithms have recently been shown to be vulnerable to well-crafted attacks. For instance, a poisoning attack can manipulate the predictions of a model by deliberately contaminating its training data. In this paper, we investigate the effect of network pruning on resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of a poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, to degrade test accuracy below 60% on the MNIST-1-7 dataset, fewer than 10 retraining epochs with poisoned data suffice for the original network, while about 16 and 40 epochs are required for the 90%- and 99%-pruned networks, respectively.
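The pruning levels mentioned above (90% and 99% of weights removed) are commonly obtained by magnitude pruning. The following NumPy sketch illustrates that idea; it is an illustrative assumption, not the paper's exact pruning procedure, and `magnitude_prune` is a hypothetical helper name.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))     # stand-in for one weight matrix
pruned = magnitude_prune(w, 0.90)
print(np.mean(pruned == 0))         # fraction of zeroed weights, about 0.9
```

Intuitively, the surviving large-magnitude weights leave fewer free parameters for poisoned retraining data to push toward a degraded model, which is consistent with the paper's observation that more retraining epochs are needed to attack the pruned networks.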