Abstract: Performance microbenchmarking is essential for ensuring software quality, as it provides granular insights into code efficiency. While automated performance microbenchmark generation tools (e.g., ju2jmh) have been proposed to spare practitioners from manually curating microbenchmarks, the high volume of generated benchmarks can lead to protracted execution times, as many of the generated benchmarks are too short to yield meaningful performance measurements. In this paper, we present a novel approach that optimizes microbenchmark execution through a batching strategy, i.e., grouping benchmarks with similar code coverage and executing them as a single unit to 1) reduce execution overhead and 2) mitigate the bias introduced by overly short microbenchmarks. We evaluate the effectiveness of this approach across various Java projects, comparing the execution times of batched and individually executed microbenchmarks. Our findings demonstrate substantial improvements in execution efficiency, reducing execution time by up to 89.81% while preserving high microbenchmark stability.
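To make the batching idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of grouping benchmarks by coverage similarity: it assumes coverage is available as a set of covered code elements per benchmark, and it uses Jaccard similarity with a greedy grouping pass; the metric, the threshold, and the greedy strategy are all illustrative assumptions.

```java
import java.util.*;

public class CoverageBatcher {

    /** Jaccard similarity between two sets of covered code elements. */
    static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 1.0;
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }

    /**
     * Greedily group benchmarks whose coverage overlaps by at least
     * `threshold`; each resulting group would then be executed as one
     * batched unit instead of as separate microbenchmarks.
     */
    static List<List<String>> batch(Map<String, Set<String>> coverage,
                                    double threshold) {
        List<List<String>> batches = new ArrayList<>();
        Set<String> assigned = new HashSet<>();
        for (String bench : coverage.keySet()) {
            if (assigned.contains(bench)) continue;
            List<String> group = new ArrayList<>(List.of(bench));
            assigned.add(bench);
            for (String other : coverage.keySet()) {
                if (!assigned.contains(other)
                        && jaccard(coverage.get(bench),
                                   coverage.get(other)) >= threshold) {
                    group.add(other);
                    assigned.add(other);
                }
            }
            batches.add(group);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical coverage data: benchmark name -> covered methods.
        Map<String, Set<String>> coverage = Map.of(
                "benchAdd",    Set.of("List.add", "List.grow"),
                "benchInsert", Set.of("List.add", "List.grow", "List.shift"),
                "benchGet",    Set.of("List.get"));
        // With a 0.5 threshold, benchAdd and benchInsert land in one batch
        // while benchGet stays alone (group order may vary).
        System.out.println(batch(coverage, 0.5));
    }
}
```

In a real pipeline, each group would be wrapped in a single JMH benchmark harness so that warm-up and measurement iterations are amortized across the whole batch rather than paid per benchmark.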