Toward Testing Deep Learning Library via Model Fuzzing

Published: 20 Jun 2023, Last Modified: 07 Aug 2023. AdvML-Frontiers 2023.
Keywords: Model Fuzzing, Deep Learning Library, Framework Testing
Abstract: The increasing adoption of deep learning (DL) technologies in safety-critical industries has brought a corresponding rise in security challenges. However, the security of DL frameworks (TensorFlow, PyTorch, PaddlePaddle), which serve as the foundation of various DL models, has not received the attention it deserves. Vulnerabilities in DL frameworks can cause significant security risks, such as unreliable models and data leakage. In this research project, we address this challenge with a specifically designed model fuzzing method. First, we generate diverse models via optimized mutation strategies to test library implementations in both the training and prediction phases. Furthermore, we prioritize the selection of model seeds using a seed performance score that accounts for coverage, discovery time, and number of mutations. Our algorithm also selects the optimal mutation strategy heuristically to amplify inconsistencies. Finally, to evaluate the effectiveness of our scheme, we implement our test framework and conduct experiments on existing DL frameworks. The preliminary results demonstrate that this is a promising direction.
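The seed prioritization described in the abstract, combining coverage, discovery time, and mutation count into a performance score, might be sketched as follows. This is a minimal illustration only: the class name `Seed`, the functions `seed_score` and `pick_next_seed`, the weights, and the exact scoring formula are all assumptions, not the paper's actual algorithm.

```python
import time
from dataclasses import dataclass

@dataclass
class Seed:
    """A model seed tracked by the fuzzer (fields are hypothetical)."""
    model_id: str
    coverage: float        # fraction of library code exercised, in [0, 1]
    discovered_at: float   # timestamp when the seed was first generated
    mutation_count: int    # how many times this seed has been mutated

def seed_score(seed: Seed, now: float,
               w_cov: float = 0.6, w_time: float = 0.2, w_mut: float = 0.2) -> float:
    """Combine coverage, recency of discovery, and mutation count into one
    priority score. The weights and formula here are illustrative."""
    recency = 1.0 / (1.0 + (now - seed.discovered_at))  # newer seeds rank higher
    novelty = 1.0 / (1.0 + seed.mutation_count)         # less-mutated seeds rank higher
    return w_cov * seed.coverage + w_time * recency + w_mut * novelty

def pick_next_seed(seeds, now=None):
    """Select the seed with the highest score for the next mutation round."""
    now = time.time() if now is None else now
    return max(seeds, key=lambda s: seed_score(s, now))
```

In practice, the chosen seed would then be mutated by whichever strategy the heuristic selector favors, and any behavioral inconsistency between frameworks (e.g., differing outputs for the same mutated model) would be recorded as a potential bug.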
Submission Number: 9