MMOS: Multi-Staged Mutation Operator Scheduling for Deep Learning Library Testing

Published: 01 Jan 2022, Last Modified: 17 May 2023 · GLOBECOM 2022
Abstract: The rapid development of deep learning (DL) technology has made DL libraries such as TensorFlow widely used in practice. However, the complexity of these libraries inevitably leads to vulnerabilities. Recent research on DL library testing generally designs a variety of mutation operators to generate new models through model mutation. However, these efforts overlook the fact that different mutation operators differ in vulnerability-mining efficiency, so treating all operators equally is inefficient. Based on this observation, we design a novel mutation operator scheduling strategy to improve the efficiency of vulnerability mining in DL libraries, comprising a multi-staged mutation operator selection strategy and a mutation operator energy allocation strategy. To evaluate our approach, we implement a prototype called MMOS. The results show that, compared with a random strategy, MMOS finds six more crash bugs, two more NaN bugs and one more inconsistency bug across four widely used DL libraries: TensorFlow, Theano, CNTK and MXNet. Among them, MMOS finds five previously unknown vulnerabilities in MXNet and one in CNTK. Moreover, MMOS outperforms LEMON in vulnerability mining of DL libraries, finding 13 more vulnerabilities in total.
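The core idea of scheduling mutation operators by observed efficiency can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical two-stage scheduler in which an exploration stage samples all operators uniformly, and an exploitation stage allocates energy (selection weight) in proportion to each operator's observed bug-finding rate. All names and parameters here are illustrative assumptions.

```python
import random


class OperatorScheduler:
    """Illustrative sketch (not MMOS itself): multi-staged mutation
    operator scheduling with efficiency-based energy allocation."""

    def __init__(self, operators, explore_rounds=20, seed=0):
        self.operators = list(operators)
        self.explore_rounds = explore_rounds  # length of stage 1
        self.uses = {op: 0 for op in self.operators}
        self.bugs = {op: 0 for op in self.operators}
        self.rounds = 0
        self.rng = random.Random(seed)

    def select(self):
        """Pick the next mutation operator to apply."""
        self.rounds += 1
        if self.rounds <= self.explore_rounds:
            # Stage 1 (exploration): uniform sampling to gather statistics.
            op = self.rng.choice(self.operators)
        else:
            # Stage 2 (exploitation): weight each operator by its observed
            # bug-finding rate, with +1 smoothing so that so-far-unproductive
            # operators keep a nonzero chance of being scheduled.
            weights = [(self.bugs[op] + 1) / (self.uses[op] + 1)
                       for op in self.operators]
            op = self.rng.choices(self.operators, weights=weights)[0]
        self.uses[op] += 1
        return op

    def report(self, op, found_bug):
        """Feed back whether applying `op` exposed a library bug."""
        if found_bug:
            self.bugs[op] += 1


# Usage sketch: simulate two operators, one of which triggers bugs often.
sched = OperatorScheduler(["op_effective", "op_ineffective"], explore_rounds=20)
for _ in range(500):
    op = sched.select()
    sched.report(op, found_bug=(op == "op_effective"))
# After exploration, the scheduler allocates most energy to op_effective.
```

The +1 smoothing plays the role of keeping the schedule adaptive: if a previously unproductive operator starts finding bugs in later mutation stages, its weight recovers automatically.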