Abstract: Deep learning (DL) systems have demonstrated remarkable capabilities across domains such as image classification, natural language processing, and recommender systems, making them significant contributors to the advancement of software intelligence. Nevertheless, in domains where security assurance is paramount, the reliability and stability of DL systems must be thoroughly tested before practical deployment. Given the growing demand for high-quality assurance of DL systems, the field of DL testing has gained significant traction. Researchers have adapted testing techniques and criteria from traditional software testing to deep neural networks, yielding results that enhance the overall security of DL technology. To enrich the test samples available to DL testing systems and to address the unintelligibility of samples produced by repeated mutations, we propose DeepSensitive, a fuzz testing tool that leverages DL interpretability techniques, specifically the DeepLIFT algorithm, to identify sensitive neurons in the input layer. DeepSensitive then perturbs these sensitive neurons in a fuzzing loop to generate novel test samples. We evaluate DeepSensitive on several mainstream image datasets and deep learning models, demonstrating that it generates test samples efficiently and intuitively.
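The core idea above, attributing sensitivity to input neurons and then fuzzing only the most sensitive ones, can be sketched minimally. The snippet below is an illustrative assumption, not the paper's implementation: it uses a linear model, for which the DeepLIFT rescale rule reduces to a per-feature contribution of `(x - x_ref) * w`, selects the top-k inputs by absolute contribution, and perturbs only those to produce a new test sample. All function names are hypothetical.

```python
import numpy as np

def deeplift_contributions(x, x_ref, w):
    """Per-feature contribution of input x relative to reference x_ref.

    For a linear model, DeepLIFT's rescale rule reduces to this exact form;
    for deep nets the contributions are propagated layer by layer instead.
    """
    return (x - x_ref) * w

def sensitive_indices(contribs, k):
    """Indices of the k input neurons with the largest absolute contribution."""
    return np.argsort(-np.abs(contribs))[:k]

def fuzz_sensitive(x, idx, eps=0.1, seed=0):
    """Perturb only the sensitive inputs with small noise (a single fuzzing step)."""
    rng = np.random.default_rng(seed)
    x_new = x.copy()
    x_new[idx] += eps * rng.standard_normal(len(idx))
    return np.clip(x_new, 0.0, 1.0)  # keep pixel values in a valid range

# Toy example: 4 input "pixels", an all-zero reference (e.g., a black image).
x = np.array([0.2, 0.9, 0.5, 0.1])
x_ref = np.zeros_like(x)
w = np.array([0.1, 2.0, -1.5, 0.05])

c = deeplift_contributions(x, x_ref, w)
idx = sensitive_indices(c, k=2)   # the two most influential inputs
sample = fuzz_sensitive(x, idx)   # novel test sample, perturbed only there
```

Because only the few most influential inputs change, the generated sample stays close to the original and remains visually intelligible, in contrast to samples accumulated through many unconstrained mutations.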