Abstract: Benchmarking is crucial for developing new algorithms. This also applies to solvers for the propositional satisfiability (SAT) problem. Benchmark selection is about choosing representative problem instances that reliably discriminate solvers based on their runtime. In this paper, we present a dynamic benchmark selection approach based on active learning. Our approach estimates the rank of a new solver among its competitors, striving to minimize benchmarking runtime while maximizing ranking accuracy. Instead of using real-valued solver runtimes, our approach works with discretized runtime labels, which yielded better solver rank predictions. We evaluated this approach on the Anniversary Track dataset from the SAT Competition 2022. Our benchmark selection approach can predict the rank of a new solver after approximately 10 % of the time it would take to run the solver on all instances of this dataset, with a prediction accuracy of approximately 92 %. Additionally, we discuss the importance of instance families in the selection process. In conclusion, our tool offers a reliable method for solver engineers to assess a new solver's performance efficiently.