Scalable isolation of failure-inducing changes via version comparison

Published: 01 Jan 2013, Last Modified: 13 May 2023. ISSRE (Supplemental Proceedings) 2013.
Abstract: Despite indisputable progress, automated debugging methods still face difficulties with scalability and runtime efficiency. To reach large-scale projects, we propose an approach that reports small sets of suspicious code changes. Its essential strength is that the size of these reports is proportional to the amount of change between code commits, not to the total project size. Our method combines version comparison and information on failed tests with static and dynamic analysis. We evaluate it on real bugs from Apache Hadoop, an open source project with over 2 million LOC. In 2 out of 4 cases, the set of suspects produced by our approach contains exactly the location of the defective code (and no false positives). Another defect could be pinpointed with small extensions to the approach. Moreover, the time overhead of our approach is moderate, namely 3-4 times the duration of a failed software test.
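To make the core idea of the abstract concrete, here is a minimal, hypothetical sketch in Python of how changed code locations can be intersected with the coverage of failing tests so that the report size is bounded by the change set rather than the project size. The function name, data structures, and example paths are illustrative assumptions, not the paper's actual implementation.

```python
# Conceptual sketch: report only those changed locations that failing tests
# actually execute. All names below are illustrative assumptions.

from typing import Dict, Set, Tuple

Location = Tuple[str, int]  # (file path, line number)


def suspicious_changes(
    changed_lines: Dict[str, Set[int]],          # per file: lines touched by the new commit
    failing_test_coverage: Dict[str, Set[int]],  # per file: lines executed by failing tests
) -> Set[Location]:
    """Intersect the change set with failing-test coverage.

    The number of reported suspects is bounded by the size of the change
    between commits, which is the scalability property the abstract claims.
    """
    suspects: Set[Location] = set()
    for path, lines in changed_lines.items():
        covered = failing_test_coverage.get(path, set())
        suspects |= {(path, line) for line in lines & covered}
    return suspects


if __name__ == "__main__":
    # Toy example: two files changed, but the failing tests only execute one of them.
    changed = {"src/JobTracker.java": {120, 121}, "src/TaskRunner.java": {45}}
    coverage = {"src/JobTracker.java": {118, 119, 120, 121, 122}}
    print(suspicious_changes(changed, coverage))
    # -> {('src/JobTracker.java', 120), ('src/JobTracker.java', 121)}
```

In this sketch, the change in `TaskRunner.java` is not reported because no failing test covers it, illustrating how combining version comparison with failing-test information filters the suspect set.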