TL;DR: We propose a general mechanism to create new benchmark functions via affine combinations of existing problems, which allows us to test the generalizability of AutoML methods in the context of numerical optimization.
Abstract: A recent suggestion generates new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform. In this work, we propose a further generalization that allows affine combinations of more than two of the original instances, with arbitrarily chosen locations of the global optima. We demonstrate that the resulting MA-BBOB generator can help fill the instance space while preserving the overall patterns in algorithm performance. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied. MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.
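The core idea of an affine combination of benchmark problems can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the exact MA-BBOB construction (which operates on log-scaled BBOB function values and uses the COCO instance machinery); the helper names `make_affine_combination`, `sphere`, and `rastrigin` are hypothetical illustrations.

```python
import numpy as np

def sphere(x):
    """Sphere function: global optimum value 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin function: global optimum value 0 at the origin."""
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def make_affine_combination(components, weights, x_opt):
    """Return f(x) = sum_i w_i * f_i(x - x_opt), weights normalized to sum to 1.

    Shifting the inputs by x_opt places the shared global optimum of the
    combined problem at the freely chosen location x_opt (illustrative
    simplification: all components here attain their optimum at the origin).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize -> affine combination

    def combined(x):
        z = np.asarray(x, dtype=float) - x_opt
        return float(sum(w * f(z) for w, f in zip(weights, components)))

    return combined

# Blend two classic problems with a chosen optimum location.
x_opt = np.array([1.0, -2.0])
f = make_affine_combination([sphere, rastrigin], [0.7, 0.3], x_opt)
print(f(x_opt))  # value at the constructed global optimum: 0.0
```

Varying the weight vector interpolates the landscape between the component problems, which is what lets such a generator fill gaps in the instance space.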
Keywords: Benchmarking, algorithm selection, black-box optimization, numerical optimization, function generation, instance space, exploratory landscape analysis
Abcd Fit: Benchmarks
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
CPU Hours: 0
GPU Hours: 0
TPU Hours: 0