Cost-Effective Testing of a Deep Learning Model through Input Reduction

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: we propose DeepReduce, a software engineering approach to cost-effective testing of Deep Learning models.
Abstract: With the increasing adoption of Deep Learning (DL) models in various applications, testing DL models is vitally important. However, testing DL models is costly and time-consuming, especially when developers explore alternative designs of DL models and tune the hyperparameters. To reduce testing cost, we propose to use only a selected subset of the testing data, which is small but representative enough for a quick estimation of the performance of DL models. Our approach, called DeepReduce, adopts a two-phase strategy. First, it selects testing data to satisfy testing adequacy. Then, it selects additional testing data so that the distribution of the selected data approximates that of the whole testing data, by minimizing relative entropy. Experiments with various DL models and datasets show that our approach can reduce the testing data to 4.6\% of its original size on average while reliably estimating the performance of DL models. Our approach significantly outperforms random selection, and is more stable and reliable than the state-of-the-art approach.
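The second phase described in the abstract amounts to growing the selected subset so that its distribution stays close to that of the full testing data under relative entropy (KL divergence). Below is a minimal sketch of that idea, assuming test inputs are bucketed by their predicted class; the function names (`kl_divergence`, `histogram`, `greedy_select`) and the greedy strategy are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def histogram(bucket_ids, n_buckets):
    """Empirical distribution of inputs over discrete buckets (e.g. predicted classes)."""
    counts = np.bincount(np.asarray(bucket_ids), minlength=n_buckets)
    return counts / max(counts.sum(), 1)

def greedy_select(bucket_ids, n_buckets, budget, seed_indices=()):
    """Greedily grow the subset so its bucket distribution tracks the full test set's."""
    target = histogram(bucket_ids, n_buckets)      # distribution of the whole testing data
    selected = list(seed_indices)                  # e.g. inputs chosen in the adequacy phase
    remaining = [i for i in range(len(bucket_ids)) if i not in set(selected)]
    while len(selected) < budget and remaining:
        best_i, best_kl = None, np.inf
        for i in remaining:                        # pick the input that most reduces divergence
            trial = histogram([bucket_ids[j] for j in selected + [i]], n_buckets)
            kl = kl_divergence(trial, target)
            if kl < best_kl:
                best_i, best_kl = i, kl
        selected.append(best_i)
        remaining.remove(best_i)
    return selected

# Hypothetical usage: reduce 10,000 test inputs to roughly 4.6% (460 inputs).
# preds = model.predict(x_test).argmax(axis=1)
# keep = greedy_select(preds, n_buckets=10, budget=460)
```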
Code: https://github.com/DeepReduce/DeepReduce
Keywords: Software Testing, Deep Learning, Input Data Reduction
