DiBB: distributing black-box optimization

Published: 01 Jan 2022 · Last Modified: 20 May 2025 · GECCO 2022 · CC BY-SA 4.0
Abstract: DiBB (for Distributing Black-Box) is a meta-algorithm and framework that addresses the decades-old scalability issue of Black-Box Optimization (BBO), including Evolutionary Computation. Algorithmically, it does so by creating, out of the box, a Partially Separable (PS) version of any existing black-box algorithm. This is done by leveraging expert knowledge about the task at hand to define blocks of parameters expected to have significant correlation, such as the weights entering the same neuron or layer in a neuroevolution application. DiBB distributes the computation to a set of machines without further customization, while still retaining the advanced features of the underlying BBO algorithm, such as scale invariance and step-size adaptation, which are typically lost in recent distributed ES implementations. This is achieved by running a separate instance of the underlying base algorithm for each block, each on a dedicated machine, with DiBB handling communication and constructing complete individuals for evaluation on the original task. DiBB's performance scales constantly with the number of parameter blocks defined, which should allow for unprecedented applications on large clusters. Our reference implementation (Python, on GitHub and PyPI) demonstrates a 5x speed-up on COCO/BBOB using our new PS-CMA-ES. We also showcase a neuroevolution application (11,590 weights) on the PyBullet Walker2D with our new PS-LM-MA-ES.
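
To make the block-wise decomposition concrete, below is a minimal, self-contained sketch of the idea described in the abstract. It is not DiBB's actual API: the names `BlockES`, `optimize`, and `sphere` are hypothetical, the per-block optimizer is a toy (1+1)-ES rather than CMA-ES or LM-MA-ES, and the blocks run sequentially in one process rather than on dedicated machines. What it illustrates is the core mechanism: parameters are split into user-defined blocks, each block is optimized by its own independent optimizer instance, and complete individuals are assembled (block candidate plus the incumbents of the other blocks) before being evaluated on the original task.

```python
# Conceptual sketch of partially separable, block-wise black-box optimization.
# NOT DiBB's actual API; all names here are hypothetical placeholders.
import numpy as np

def sphere(x):
    """Toy objective: sum of squares (minimization)."""
    return float(np.sum(x ** 2))

class BlockES:
    """A tiny (1+1)-ES that optimizes only the parameters of one block.
    In DiBB, this role is played by a full BBO algorithm (e.g. CMA-ES)."""
    def __init__(self, size, sigma=0.3, rng=None):
        self.rng = rng or np.random.default_rng()
        self.best = self.rng.normal(size=size)   # incumbent values for this block
        self.sigma = sigma

    def propose(self):
        return self.best + self.sigma * self.rng.normal(size=self.best.size)

    def tell(self, candidate, improved):
        if improved:
            self.best = candidate
        # Rough 1/5th-success-rule-style step-size adaptation
        self.sigma *= 1.1 if improved else 0.97

def optimize(objective, block_sizes, iterations=300, seed=0):
    rng = np.random.default_rng(seed)
    blocks = [BlockES(s, rng=rng) for s in block_sizes]
    best_f = objective(np.concatenate([b.best for b in blocks]))
    for _ in range(iterations):
        for i, block in enumerate(blocks):   # in DiBB, blocks run on separate machines
            candidate = block.propose()
            # Assemble a complete individual: this block's candidate plus
            # the incumbents of all other blocks, evaluated on the full task.
            full = np.concatenate(
                [candidate if j == i else blocks[j].best for j in range(len(blocks))]
            )
            f = objective(full)
            improved = f < best_f
            block.tell(candidate, improved)
            if improved:
                best_f = f
    return np.concatenate([b.best for b in blocks]), best_f

if __name__ == "__main__":
    x, f = optimize(sphere, block_sizes=[5, 5, 5])
    print(f"best fitness: {f:.6f}")
```

Because each block only sees its own slice of the parameter vector, the per-block optimizer keeps whatever advanced machinery it has (covariance adaptation, step-size control) at the block's dimensionality, which is the property the abstract highlights for PS-CMA-ES and PS-LM-MA-ES.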