Enhancing Neuron Coverage of DNN Models for Adversarial Testing

Published: 01 Jan 2024 · Last Modified: 14 May 2025 · ISSRE (Workshops) 2024 · CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial attacks, which raises concerns about the safety of deep learning systems. In recent years, numerous adversarial algorithms have been proposed to test DNN models. However, commonly used algorithms pay little attention to coverage-related metrics and cannot guarantee the test adequacy of DNN models. This paper proposes NC-FGSM, an adversarial algorithm that actively increases the neuron coverage of generated samples while maintaining their adversarial capability. The effectiveness of the algorithm is validated and compared against other methods for improving neuron coverage. The experiments show that NC-FGSM performs best when both attack success rate and neuron coverage improvement are considered, which helps improve the adequacy of adversarial testing of DNN models and ensure the safety of deep learning systems.
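
The abstract does not give NC-FGSM's exact formulation, so the following is only a minimal sketch of the general idea it names: combining FGSM's sign-gradient step with a neuron-coverage objective. The model, the function name `nc_fgsm_step`, and the parameters `lam` and `threshold` are all hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: the paper's NC-FGSM details are not given in the abstract.
# Assumption: a joint objective adds the activations of not-yet-covered hidden neurons
# to the classification loss before taking the usual FGSM sign step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Tiny stand-in model (hypothetical) used only to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        h = F.relu(self.fc1(x.flatten(1)))  # hidden activations tracked for coverage
        return h, self.fc2(h)

def nc_fgsm_step(model, x, y, covered, eps=0.1, lam=0.5, threshold=0.0):
    """One FGSM-style step that also pushes uncovered hidden neurons above `threshold`.
    `covered` is a boolean tensor marking neurons already activated by earlier tests."""
    x = x.clone().detach().requires_grad_(True)
    hidden, logits = model(x)
    cls_loss = F.cross_entropy(logits, y)
    # Assumed coverage term: encourage activation of neurons that are still uncovered.
    uncovered = ~covered
    cov_loss = hidden[:, uncovered].mean() if uncovered.any() else torch.zeros((), device=x.device)
    loss = cls_loss + lam * cov_loss
    loss.backward()
    # Standard FGSM perturbation: move inputs along the sign of the joint-loss gradient.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    # Update the coverage record with neurons the adversarial batch now activates.
    with torch.no_grad():
        new_hidden, _ = model(x_adv)
        covered = covered | (new_hidden > threshold).any(dim=0)
    return x_adv, covered

if __name__ == "__main__":
    model = SmallNet()
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    covered = torch.zeros(64, dtype=torch.bool)
    x_adv, covered = nc_fgsm_step(model, x, y, covered)
    print("neuron coverage after one step:", covered.float().mean().item())
```

In this sketch the coverage record is carried across batches so later steps target neurons that remain inactive, which is one plausible way to trade off attack success rate against coverage gain via `lam`; the paper's actual balancing strategy may differ.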