Adversarial Robustness of Pruned Neural Networks

09 Feb 2018 (modified: 11 Feb 2018) · ICLR 2018 Workshop Submission
Abstract: Deep neural network pruning produces a compressed network by discarding “unimportant” weights or filters. Standard evaluations show that pruned models retain the original prediction accuracy while delivering substantial test-time speedups, but their adversarial robustness remains unexplored, even though it is an important security property in deployment. We study the robustness of pruned neural networks under adversarial attacks. We find that although pruned models maintain the original accuracy, they are more vulnerable to such attacks. We further show that adversarial training improves the robustness of pruned networks; however, we observe trade-offs among compression rate, accuracy, and robustness in adversarially trained pruned networks. Our analysis suggests that neural network pruning should attend to robustness rather than merely preserving classification accuracy.
TL;DR: We study the robustness of pruned neural networks under adversarial attacks and perform adversarial training on pruned models.
Keywords: model compression, adversarial attacks, adversarial training, neural network pruning, fast gradient sign method (FGSM), projected gradient descent (PGD)
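The submission ships no code, but the two ingredients it combines are standard. A minimal sketch of magnitude pruning and a one-step FGSM attack might look as follows; this is an illustration in PyTorch, not the authors' implementation, and `model`, the data batch, `epsilon`, and the pruning `amount` are assumed placeholders:

```python
import torch
import torch.nn.utils.prune as prune

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def magnitude_prune(model, amount=0.9):
    """Global unstructured magnitude pruning, one common pruning scheme
    (the paper's exact pruning criterion may differ)."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=amount)
```

An experiment in the spirit of the abstract would prune at several compression rates, then compare clean accuracy against accuracy on `fgsm_attack` (or PGD) inputs, with and without adversarial training, to trace the trade-off curve the authors describe.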